
Debian 10 kubeadm Deployment Tutorial 02 - Deploying the Master Node

May 19, 2022

[!TIP]
Please credit the source when reposting: https://janrs.com

Initialize the master node

List the required images and their versions

kubeadm config images list --kubernetes-version v1.24.8
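
It should print a list like the following (versions inferred from the images pulled in the next step):

k8s.gcr.io/kube-apiserver:v1.24.8
k8s.gcr.io/kube-controller-manager:v1.24.8
k8s.gcr.io/kube-scheduler:v1.24.8
k8s.gcr.io/kube-proxy:v1.24.8
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.5-0
k8s.gcr.io/coredns/coredns:v1.8.6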

Based on the versions listed above, pull the equivalent images from Alibaba's mirror registry, retag them to the k8s.gcr.io names kubeadm expects, and then remove the mirror tags:

ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/coredns:v1.8.6 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/etcd:3.5.5-0 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/pause:3.7 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-proxy:v1.24.8 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.8 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.8 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8

ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/etcd:3.5.5-0 k8s.gcr.io/etcd:3.5.5-0 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.8 k8s.gcr.io/kube-apiserver:v1.24.8 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-proxy:v1.24.8 k8s.gcr.io/kube-proxy:v1.24.8 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.8 k8s.gcr.io/kube-scheduler:v1.24.8 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/pause:3.7 k8s.gcr.io/pause:3.7  &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8 k8s.gcr.io/kube-controller-manager:v1.24.8

ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/coredns:v1.8.6 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/etcd:3.5.5-0 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.8 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-proxy:v1.24.8 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.8 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/pause:3.7 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8
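
The block above is repetitive; as a compact alternative, here is a minimal loop-based sketch that performs the same pull/tag/rm sequence (the image list is derived from the commands above):

#!/usr/bin/env bash
# Pull each image from the Alibaba mirror, retag it to the k8s.gcr.io name
# kubeadm expects, then drop the mirror tag.
set -euo pipefail

MIRROR=registry.aliyuncs.com/google_containers
IMAGES=(
  "coredns:v1.8.6 coredns/coredns:v1.8.6"
  "etcd:3.5.5-0 etcd:3.5.5-0"
  "pause:3.7 pause:3.7"
  "kube-proxy:v1.24.8 kube-proxy:v1.24.8"
  "kube-scheduler:v1.24.8 kube-scheduler:v1.24.8"
  "kube-apiserver:v1.24.8 kube-apiserver:v1.24.8"
  "kube-controller-manager:v1.24.8 kube-controller-manager:v1.24.8"
)

for entry in "${IMAGES[@]}"; do
  src="${entry% *}"   # name on the mirror
  dst="${entry#* }"   # target name under k8s.gcr.io
  ctr -n k8s.io i pull "${MIRROR}/${src}"
  ctr -n k8s.io i tag "${MIRROR}/${src}" "k8s.gcr.io/${dst}"
  ctr -n k8s.io i rm "${MIRROR}/${src}"
done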

Configure auditing

[ ! -d /etc/kubernetes ] && mkdir -p /etc/kubernetes
cat << EOF > /etc/kubernetes/audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF
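
If logging every request at the Metadata level is too noisy, the policy can be narrowed. A purely illustrative sketch (not part of the original tutorial) that skips read-only verbs and logs everything else at the Metadata level:

# Example only: skip read-only requests, log the rest at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
  verbs: ["get", "list", "watch"]
- level: Metadata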

Create the kubeadm configuration file

cat <<EOF > /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  # IPVS settings
  minSyncPeriod: 5s
  syncPeriod: 5s
  scheduler: wrr
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 8o5o52.qzpj42w0j4mrdug4
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.222.121
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  certSANs:
  - 127.0.0.1
  - 172.16.222.121
  extraArgs:
    audit-log-maxage: "20"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-log-path: /var/log/kube-audit/audit.log
    audit-policy-file: /etc/kubernetes/audit-policy.yaml
    event-ttl: 720h
    service-node-port-range: 30000-50000
  extraVolumes:
  - hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit-config
    pathType: File
    readOnly: true
  - hostPath: /var/log/kube-audit
    mountPath: /var/log/kube-audit
    name: audit-log
    pathType: DirectoryOrCreate
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
    experimental-cluster-signing-duration: 87600h
    feature-gates: RotateKubeletServerCertificate=true
    node-cidr-mask-size: "24"
    node-monitor-grace-period: 10s
    pod-eviction-timeout: 2m
    terminated-pod-gc-threshold: "30"
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# If the images were not pulled manually above, use the Alibaba mirror registry instead:
#imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.8
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
EOF
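
Optionally, the configuration can be sanity-checked first; kubeadm init supports a dry run that renders the manifests and prints what it would do without changing the host:

kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --dry-run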

Initialize the control plane with kubeadm

Known issue before initializing

Kubernetes 1.24 uses the pause:3.7 image, but containerd's CRI plugin defaults to pause:3.6 and keeps trying to pull it from registry.k8s.io, which hangs and makes the initialization fail.

Fix: pull the image manually and retag it by running the following:

ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/pause:3.6 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/pause:3.6
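
An alternative fix, not used in this tutorial, is to point containerd's CRI plugin at a reachable sandbox image instead of retagging. Assuming the stock layout of /etc/containerd/config.toml, set:

# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"

Then restart containerd:

systemctl restart containerd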

Run the initialization

kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --upload-certs

On success, it prints output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.222.121:6443 --token 8o5o52.qzpj42w0j4mrdug4 \
    --discovery-token-ca-cert-hash sha256:b906fdc04582a1dcd6388bf32329af89e211f1e1c14e3908cd675010eaa11b3a
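
The token shown is the bootstrap token from kubeadm-config.yaml and has a 24-hour TTL; after it expires, a fresh join command can be printed on the master with:

kubeadm token create --print-join-command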

Check the node status

kubectl get nodes

Output:

NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   5m50s   v1.24.8
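
NotReady is expected at this point: no CNI plugin is installed yet, which the Calico deployment below resolves. The cause can be confirmed from the node's Ready condition, whose message typically mentions NetworkReady=false:

kubectl describe node k8s-master01 | grep -i networkready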

Deploy the Calico network plugin

[!NOTE]
Upstream does not recommend a manual binary deployment; use the operator instead.

The Calico v3.24.1 YAML manifests have been mirrored on my blog; just download and deploy them directly.

Deploy the operator

cd /etc/kubernetes/ && \
wget https://janrs.com/calico-tigera-operator.yaml && \
kubectl create -f /etc/kubernetes/calico-tigera-operator.yaml

Check the pods

kubectl get pods -n tigera-operator

Output:

NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-74987dd45c-sc9dg   1/1     Running   0          22s

Deploy the custom resources

cd /etc/kubernetes/ && \
wget https://janrs.com/calico-custom-resources.yaml

Adjust the network CIDR: change the ipPools CIDR to the podSubnet value configured in /etc/kubernetes/kubeadm-config.yaml, i.e. 10.244.0.0/16 (a one-liner for this edit follows the snippet below).

...
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
...
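
As a minimal sketch of that edit, assuming the downloaded manifest still carries Calico's stock default CIDR of 192.168.0.0/16:

sed -i 's|cidr: 192.168.0.0/16|cidr: 10.244.0.0/16|' /etc/kubernetes/calico-custom-resources.yaml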

[!NOTE]
The following is a reference configuration only; it is not required.

# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # NodeMetricsPort specifies which port calico/node serves prometheus metrics on. By default, metrics are not enabled. If specified, this overrides any FelixConfiguration resources which may exist. If omitted, then prometheus metrics may still be configured through FelixConfiguration.
  nodeMetricsPort: 9127
  # TyphaMetricsPort specifies which port calico/typha serves prometheus metrics on. By default, metrics are not enabled.
  typhaMetricsPort: 9128

  # CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence.
  calicoKubeControllersDeployment:
    spec:
      template:
        spec:
          nodeSelector:
            controller-plane: 'true'
          tolerations:
          - effect: NoSchedule
            operator: Exists

  # ControlPlaneNodeSelector is used to select control plane nodes on which to run Calico components. This is globally applied to all resources created by the operator excluding daemonsets.
  controlPlaneNodeSelector:
    controller-plane: 'true'

  # ControlPlaneTolerations specify tolerations which are then globally applied to all resources created by the operator.
  controlPlaneTolerations:
    - effect: NoSchedule
      operator: Exists

  #typhaDeployment:
    #spec:
      #template:
        #spec:
          #nodeSelector:
            #controller-plane: 'true'
          #tolerations:
          #- effect: NoSchedule
            #operator: Exists

  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec:
  apiServerDeployment:
    spec:
      template:
        spec:
          nodeSelector:
            controller-plane: 'true'
          tolerations:
          - effect: NoSchedule
            operator: Exists

Apply the deployment

kubectl create -f /etc/kubernetes/calico-custom-resources.yaml
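
The rollout takes a minute or two; you can watch the Calico pods come up before proceeding:

kubectl get pods -n calico-system -w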

Once the rollout has finished, check the nodes again

kubectl get nodes

Output:

NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   15m   v1.24.8

Check all pods

kubectl get pod -A

Output:

NAMESPACE          NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-7db6fc6764-58zvg         1/1     Running   0          44s
calico-apiserver   calico-apiserver-7db6fc6764-vb8sb         1/1     Running   0          44s
calico-system      calico-kube-controllers-b649d69d8-sljl4   1/1     Running   0          119s
calico-system      calico-node-n2lh9                         1/1     Running   0          119s
calico-system      calico-typha-85f4796c86-dxdng             1/1     Running   0          119s
calico-system      csi-node-driver-rfjfz                     2/2     Running   0          82s
kube-system        coredns-6d4b75cb6d-95glf                  1/1     Running   0          5m34s
kube-system        coredns-6d4b75cb6d-t8hrl                  1/1     Running   0          5m34s
kube-system        etcd-k8s-master01                         1/1     Running   0          5m48s
kube-system        kube-apiserver-k8s-master01               1/1     Running   0          5m47s
kube-system        kube-controller-manager-k8s-master01      1/1     Running   0          5m47s
kube-system        kube-proxy-ll4sv                          1/1     Running   0          5m34s
kube-system        kube-scheduler-k8s-master01               1/1     Running   0          5m48s
tigera-operator    tigera-operator-6ff9678cbd-nlp2x          1/1     Running   0          3m42s
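
Since kube-proxy was configured with mode: ipvs and the wrr scheduler, you can optionally confirm that IPVS rules were programmed (this requires the ipvsadm package on the node); each virtual server line should show wrr:

ipvsadm -Ln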

At this point, the master node has been initialized successfully.

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.
Tags: k8s, kubeadm, kubernetes, CloudNative, deployment
Last updated: March 25, 2023
