Janrs.com | 杨建勇

CentOS 7 kubeadm k8s Deployment Tutorial 02 - Deploying the Master Node

[!TIP]
Deploying a k8s cluster on CentOS 7 with kubeadm

Please credit the source when reposting: https://janrs.com


CentOS 7 version: 2009
k8s version: v1.23.9
Docker version: docker-ce v20.10


Creating the k8s master node

[!NOTE]
The following command lists the images (and their versions) required by a given k8s version:
kubeadm config images list --kubernetes-version v1.23.9

1. Pull the images

Pull the images manually from the Aliyun registry mirror:


docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6 && \
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0 && \
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.9 && \
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.9 && \
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.9 && \
docker pull registry.aliyuncs.com/google_containers/pause:3.6 && \
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.9
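Alternatively, kubeadm can pull the whole image set in one step, and you can then confirm the images exist locally (an optional sketch; the flags mirror the ones used for kubeadm init below):

# Let kubeadm pull every required image from the same mirror
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.9

# Confirm the images are now available locally
docker images | grep registry.aliyuncs.com/google_containers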

2. Initialize

[!NOTE]
When initializing, set --image-repository to the same registry the images were pulled from above: registry.aliyuncs.com/google_containers
Set --apiserver-advertise-address to the host's own IP address
Note down the CIDR passed to --pod-network-cidr; it is needed later when setting up networking with Calico

kubeadm init \
--apiserver-advertise-address=172.16.222.231 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.9 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12

[!NOTE]
If initialization fails after it has already been run once, run kubeadm reset -f first and then run the initialization again
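A rough cleanup-and-retry sequence might look like this (a sketch; the rm step is optional and assumes the default kubeadm and CNI paths):

# Roll back the failed control-plane setup
kubeadm reset -f

# Optionally clear leftover kubeconfig and CNI config, then re-run kubeadm init
rm -rf $HOME/.kube /etc/cni/net.d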

If initialization succeeds, output like the following is displayed:

----
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.216.11:6443 --token 84lnde.auurvv324oghnfat \
    --discovery-token-ca-cert-hash sha256:ed487d1c133d55deda215d212d363c90718eaf6a517bbb691659ca004c04988b
----

Copy the kubeconfig as the output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are the root user, run the following instead:

export KUBECONFIG=/etc/kubernetes/admin.conf
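To keep KUBECONFIG set in new shells as well, you can persist the export (optional; assumes bash is the login shell):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile && \
source ~/.bash_profile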

Record the token for joining nodes to this master:

cat >> /home/join.txt <<EOF
kubeadm join 172.16.222.231:6443 --token cs7x2w.vltumsf0z7e0ot89 \
    --discovery-token-ca-cert-hash sha256:f569f224944b3a965d8e5695e1fd791d1693299497ba39820f8fc708c718229a
EOF
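The token printed by kubeadm init expires after 24 hours by default. If it has expired or join.txt is lost, a fresh join command can be generated on the master with the standard kubeadm subcommand:

kubeadm token create --print-join-command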

3. Deploy the Calico network plugin

Download the manifest:

cd /home && \
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate

Edit the manifest

[!NOTE]
Open calico.yaml and add the following after line 4434 (the exact line number depends on the Calico version; see the grep sketch after the example below)
The value must match --pod-network-cidr=10.244.0.0/16
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"

The modified section looks like this:

...
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            # Add the two lines here
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
...
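The line number 4434 mentioned above is specific to this Calico release. On other versions you can locate the spot and verify the edit with grep (a small helper sketch):

# Find the env block of the calico-node container; the two lines go right after this comment
grep -n "Set the hostname based on the k8s node name" calico.yaml

# Check where CALICO_IPV4POOL_CIDR now appears (the commented-out default may also match)
grep -n -A 1 "CALICO_IPV4POOL_CIDR" calico.yaml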

Apply the manifest:

kubectl apply -f calico.yaml
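To wait for the Calico pods to come up, you can optionally watch the rollout:

# Block until every calico-node pod is ready
kubectl -n kube-system rollout status daemonset/calico-node

# Or watch all kube-system pods until they reach Running (Ctrl-C to stop)
kubectl get pod -n kube-system -w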

Check the node status

[!NOTE]
If the node shows NotReady, wait 30-60 seconds and check again

kubectl get nodes

Ready means the master node has been created successfully and is running normally:

NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   15m   v1.23.9
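If the node stays NotReady for more than a minute or two, the node conditions and the kubelet log usually show why (general troubleshooting, not specific to this setup):

# Node conditions and recent events
kubectl describe node k8s-master01

# Kubelet log on the master host
journalctl -u kubelet -f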

Check the pod status

kubectl get pod -n kube-system

All components showing Running means the cluster is working normally:

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-66966888c4-2jsjw   1/1     Running   0          2m50s
calico-node-zxqzk                          1/1     Running   0          2m50s
coredns-6d8c4cb4d-cfxrh                    1/1     Running   0          16m
coredns-6d8c4cb4d-gjzp7                    1/1     Running   0          16m
etcd-k8s-master01                          1/1     Running   0          16m
kube-apiserver-k8s-master01                1/1     Running   0          16m
kube-controller-manager-k8s-master01       1/1     Running   0          16m
kube-proxy-trj45                           1/1     Running   0          16m
kube-scheduler-k8s-master01                1/1     Running   0          16m
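If any pod is stuck in Pending or CrashLoopBackOff, inspecting it usually points to the cause (the pod name below is just the Calico controller from the listing above; substitute the failing pod):

# Events and scheduling details for a pod
kubectl describe pod calico-kube-controllers-66966888c4-2jsjw -n kube-system

# Container logs for a crashing pod
kubectl logs calico-kube-controllers-66966888c4-2jsjw -n kube-system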

At this point the k8s master node has been created successfully.

If you have any questions, feel free to leave a comment at the bottom, or join the WeChat tech discussion group | My GitHub
