kubernetes-1.34.2 Binary Quick Installation and Deployment

January 21, 2026

Introduction

This is a kubernetes-1.34.2 binary installation package (other versions can be used the same way). It contains the certificates, configuration files, and startup scripts for etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, containerd, coredns, and metric-server, along with the certificate generation script, the kubeconfig generation script, and the binary executables themselves. Because the package already includes the binaries and pre-generated certificates, you can copy it straight into the installation directories, or regenerate everything by rerunning make_install_package.sh. This project provides the configuration files that accompany a k8s-1.34.2 binary installation; for the detailed installation tutorial, see:

https://developer.aliyun.com/article/1696948 . For the installation tutorial of the k8s management console xkube, see:

https://gitee.com/eeenet/xkube

Software Architecture

The binary executables in this repository's bin directory can be replaced with your own downloads. Download addresses:

https://github.com/cloudflare/cfssl/releases

https://dl.k8s.io/v1.34.2/kubernetes-server-linux-amd64.tar.gz

The configuration presets the following default parameters:

2.1. For the etcd cluster, the three node hostnames are: etcd01.my-k8s.local, etcd02.my-k8s.local, etcd03.my-k8s.local

2.2. The kube-apiserver domain name is: apiserver.my-k8s.local

2.3. Service CIDR (--service-cluster-ip-range): 10.96.0.0/16; pod CIDR (--cluster-cidr): 10.244.0.0/16

2.4. Certificate validity period: 20 years

2.5. coredns service IP: 10.96.0.10

Directory structure:

```
├── bin                             # client tools
│   ├── cfssl
│   ├── cfssl-certinfo
│   ├── cfssljson
│   ├── helm
│   └── kubectl
├── config
│   ├── authorization.yaml
│   ├── cert
│   │   ├── admin-key.pem
│   │   ├── admin.pem
│   │   ├── ca-key.pem
│   │   ├── ca.pem
│   │   ├── etcd-key.pem
│   │   ├── etcd.pem
│   │   ├── kube-apiserver-key.pem
│   │   ├── kube-apiserver.pem
│   │   ├── kube-controller-manager-key.pem
│   │   ├── kube-controller-manager.pem
│   │   ├── kube-scheduler-key.pem
│   │   ├── kube-scheduler.pem
│   │   ├── proxy-client-key.pem
│   │   └── proxy-client.pem
│   ├── config.toml
│   ├── containerd.service
│   ├── create-cert.sh
│   ├── create-kubeconfig.sh
│   ├── crictl.yaml
│   ├── csr
│   │   ├── admin-csr.json
│   │   ├── ca-config.json
│   │   ├── ca-csr.json
│   │   ├── etcd-csr.json
│   │   ├── kube-apiserver-csr.json
│   │   ├── kube-controller-manager-csr.json
│   │   ├── kube-scheduler-csr.json
│   │   └── proxy-client-csr.json
│   ├── etcd01.conf
│   ├── etcd02.conf
│   ├── etcd03.conf
│   ├── etcd.service
│   ├── kube-apiserver.conf
│   ├── kube-apiserver.service
│   ├── kubeconfig
│   │   ├── kube-controller-manager.kubeconfig
│   │   ├── kube.kubeconfig
│   │   ├── kubelet-bootstrap.kubeconfig
│   │   ├── kube-scheduler.kubeconfig
│   │   └── token.csv
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.service
│   ├── kubelet.service
│   ├── kubelet.yaml
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.service
│   └── xetcd.sh
├── coredns-v1.13.1.yaml
├── etcd
│   ├── bin
│   │   ├── etcd
│   │   ├── etcdctl
│   │   └── etcdutl
│   ├── conf
│   │   ├── etcd01.conf
│   │   ├── etcd02.conf
│   │   └── etcd03.conf
│   ├── etcd.service
│   ├── logs
│   └── ssl
│       ├── ca-key.pem
│       ├── ca.pem
│       ├── etcd-key.pem
│       └── etcd.pem
├── install.sh
├── make_install_package.sh
├── master
│   └── kubernetes
│       ├── bin
│       │   ├── kube-apiserver
│       │   ├── kube-controller-manager
│       │   ├── kubelet
│       │   └── kube-scheduler
│       ├── conf
│       │   ├── kube-apiserver.conf
│       │   ├── kube-controller-manager.conf
│       │   ├── kube-controller-manager.kubeconfig
│       │   ├── kubelet-bootstrap.kubeconfig
│       │   ├── kubelet.yaml
│       │   ├── kube-scheduler.conf
│       │   ├── kube-scheduler.kubeconfig
│       │   └── token.csv
│       ├── kube-apiserver.service
│       ├── kube-controller-manager.service
│       ├── kubelet.service
│       ├── kube-scheduler.service
│       ├── logs
│       └── ssl
│           ├── ca-key.pem
│           ├── ca.pem
│           ├── kube-apiserver-key.pem
│           ├── kube-apiserver.pem
│           ├── kube-controller-manager-key.pem
│           ├── kube-controller-manager.pem
│           ├── kube-scheduler-key.pem
│           ├── kube-scheduler.pem
│           ├── proxy-client-key.pem
│           └── proxy-client.pem
├── metric-server-0.8.0_components.yaml
├── README.en.md
├── README.md
└── worker
    └── kubernetes
        ├── bin
        │   └── kubelet
        ├── conf
        │   ├── kubelet-bootstrap.kubeconfig
        │   └── kubelet.yaml
        ├── kubelet.service
        ├── logs
        └── ssl
            ├── ca-key.pem
            └── ca.pem
```
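As a quick illustration of how the preset coredns service IP relates to the preset service CIDR, the bash sketch below checks that 10.96.0.10 falls inside 10.96.0.0/16 using pure integer math (no cluster required). This is a cheap sanity guard if you change the defaults, since the coredns service IP must sit inside --service-cluster-ip-range.

```shell
# Check that an IP lies inside a CIDR using plain bash arithmetic.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr="10.96.0.0/16"    # preset --service-cluster-ip-range
svc_ip="10.96.0.10"    # preset coredns service IP
net="${cidr%/*}"
bits="${cidr#*/}"
mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))

if (( ($(ip_to_int "$svc_ip") & mask) == ($(ip_to_int "$net") & mask) )); then
  result="in-range"
else
  result="out-of-range"
fi
echo "$result"   # → in-range
```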

Installation Tutorial

1. Step 1: Download the installation package

```shell
git clone https://gitee.com/eeenet/k8s-install
cd k8s-install
git clone https://gitee.com/eeenet/containerd   # containerd lives in a separate repository due to single-repository size limits
```

2. Step 2: Initialize the machines

```shell
./init.sh
```

3. Step 3: Build the installation package

This step generates the certificates and kubeconfig files and copies them into the different installation directories.

```shell
./make_install_package.sh
```

4. Step 4: After step 3 completes, pack the entire directory, upload it to every k8s machine, and run the install script according to each machine's role. Installation here simply copies the files into the installation directories.

```shell
./install.sh etcd01   # run on the first machine of the etcd cluster
./install.sh etcd02   # run on the second machine of the etcd cluster
./install.sh etcd03   # run on the third machine of the etcd cluster
./install.sh master   # run on the k8s master machines
./install.sh worker   # run on the k8s worker machines
```
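For orientation, the stub below sketches the kind of role dispatch an installer like this performs. `role_target` is a hypothetical helper written for this document, not the actual install.sh (which copies the real files); it only maps the role argument to the package subdirectory that would be installed on that machine.

```shell
# Hypothetical sketch of the role dispatch: map the role argument to the
# package subdirectory whose contents get copied onto this machine.
role_target() {
  case "$1" in
    etcd01|etcd02|etcd03) echo "etcd/" ;;
    master)               echo "master/kubernetes/" ;;
    worker)               echo "worker/kubernetes/" ;;
    *) echo "unknown role: $1" >&2; return 1 ;;
  esac
}

role_target etcd02   # → etcd/
role_target master   # → master/kubernetes/
```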

5. Configure hosts

Configure the following mappings in /etc/hosts on both the etcd and master machines. Change the IPs to your own: the apiserver entry should point to one of the etcd/master machines, or to a VIP that already routes to the apiserver.

```
192.168.10.185 etcd01.my-k8s.local
192.168.10.186 etcd02.my-k8s.local
192.168.10.187 etcd03.my-k8s.local
192.168.10.185 apiserver.my-k8s.local
```

On worker machines, configure /etc/hosts with:

```
192.168.10.185 apiserver.my-k8s.local
```
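Before starting any services, it is worth confirming that every hostname actually resolves on each machine. The loop below is a small sketch using getent, which consults /etc/hosts as well as DNS:

```shell
# Verify that each preset hostname resolves (via /etc/hosts or DNS).
checked=0
for h in etcd01.my-k8s.local etcd02.my-k8s.local etcd03.my-k8s.local apiserver.my-k8s.local; do
  if getent hosts "$h" > /dev/null; then
    echo "$h OK"
  else
    echo "$h MISSING"
  fi
  checked=$((checked + 1))
done
echo "checked $checked hostnames"
```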

On the machine where kubectl runs, copy the kubeconfig:

```shell
[[ -d /root/.kube ]] || mkdir /root/.kube
cp config/kubeconfig/kube.kubeconfig /root/.kube/config
```

6. Start the services and grant authorization

Start etcd first:

```shell
systemctl start etcd
```

Then start the master services:

```shell
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
```

Finally, start the worker services. The master machines need to start these as well:

```shell
systemctl start containerd
systemctl start kubelet
```

On the kubectl machine (or the machine where make_install_package.sh was run), apply the authorizations:

```shell
cd config
kubectl apply -f authorization.yaml
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
```

7. Verify the installation

7.1. Verify the etcd installation; this lists all etcd members:

```shell
cd config
./xetcd.sh all
```

The output looks like this:

```
+----------------------------------+--------+-------------+-------+
|             ENDPOINT             | HEALTH |    TOOK     | ERROR |
+----------------------------------+--------+-------------+-------+
| https://etcd03.my-k8s.local:2379 |   true | 13.709135ms |       |
| https://etcd02.my-k8s.local:2379 |   true | 13.727887ms |       |
| https://etcd01.my-k8s.local:2379 |   true | 13.611583ms |       |
+----------------------------------+--------+-------------+-------+
+------------------+---------+--------+----------------------------------+----------------------------------+------------+
|        ID        | STATUS  |  NAME  |            PEER ADDRS            |           CLIENT ADDRS           | IS LEARNER |
+------------------+---------+--------+----------------------------------+----------------------------------+------------+
| 206f11271cff2cca | started | etcd01 | https://etcd01.my-k8s.local:2380 | https://etcd01.my-k8s.local:2379 |      false |
| 2636113ae997b450 | started | etcd03 | https://etcd03.my-k8s.local:2380 | https://etcd03.my-k8s.local:2379 |      false |
| a9a64ba8a4b9168a | started | etcd02 | https://etcd02.my-k8s.local:2380 | https://etcd02.my-k8s.local:2379 |      false |
+------------------+---------+--------+----------------------------------+----------------------------------+------------+
+----------------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
|             ENDPOINT             |        ID        | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA  | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+----------------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| https://etcd01.my-k8s.local:2379 | 206f11271cff2cca |   3.6.6 |           3.6.0 |   91 MB |  60 MB |                   35% | 2.1 GB |      true |      false |         2 |     578005 |             578005 |        |                          |             false |
| https://etcd02.my-k8s.local:2379 | a9a64ba8a4b9168a |   3.6.6 |           3.6.0 |   91 MB |  60 MB |                   35% | 2.1 GB |     false |      false |         2 |     578005 |             578005 |        |                          |             false |
| https://etcd03.my-k8s.local:2379 | 2636113ae997b450 |   3.6.6 |           3.6.0 |   91 MB |  60 MB |                   35% | 2.1 GB |     false |      false |         2 |     578005 |             578005 |        |                          |             false |
+----------------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
```
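If you want to script against the health table above (for monitoring, say), a small awk pass is enough. The sketch below parses sample rows copied from this document rather than a live cluster; on a real cluster you would pipe the health-check command's table output instead.

```shell
# Count healthy endpoints in an `etcdctl endpoint health -w table` style output.
# The sample rows are copied from the output shown above.
sample='| https://etcd03.my-k8s.local:2379 | true | 13.709135ms | |
| https://etcd02.my-k8s.local:2379 | true | 13.727887ms | |
| https://etcd01.my-k8s.local:2379 | true | 13.611583ms | |'

# Field 3 (between the 2nd and 3rd pipes) is the HEALTH column.
healthy=$(printf '%s\n' "$sample" | awk -F'|' '$3 ~ /true/ { n++ } END { print n + 0 }')
echo "healthy endpoints: $healthy"   # → healthy endpoints: 3
```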

7.2. Verify the master component installation

Run the command `kubectl get cs`. The output:

```
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok
```

7.3. Verify that the worker machines connect to the master

Run: `kubectl get csr`

The output looks like:

```
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-Vu_OpMRVN2CG_m7jQb0m5fBl4_J0Tf90WoeByfNufgU   63s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
```

Based on the output of the previous step, approve the certificates one by one. Note that the certificate name differs on each node:

```shell
kubectl certificate approve node-csr-Vu_OpMRVN2CG_m7jQb0m5fBl4_J0Tf90WoeByfNufgU
```
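Approving nodes one by one gets tedious on larger clusters. The sketch below extracts Pending CSR names with awk; for determinism it parses the sample row shown earlier, and the comment shows how the same filter would be applied against a live cluster.

```shell
# Extract Pending CSR names: last column is CONDITION, first column is NAME.
# On a live cluster the equivalent would be:
#   kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' \
#     | xargs -r -n1 kubectl certificate approve
sample='node-csr-Vu_OpMRVN2CG_m7jQb0m5fBl4_J0Tf90WoeByfNufgU   63s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending'

pending=$(printf '%s\n' "$sample" | awk '$NF == "Pending" { print $1 }')
echo "$pending"   # → node-csr-Vu_OpMRVN2CG_m7jQb0m5fBl4_J0Tf90WoeByfNufgU
```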

After approving, run `kubectl get csr` again; the status will show: Approved,Issued

Finally run `kubectl get node`; the result:

```
NAME                           STATUS   ROLES    AGE    VERSION
gt5-wangbikang-test-vm10-185   Ready    <none>   2d1h   v1.34.2
gt5-wangbikang-test-vm10-186   Ready    <none>   2d     v1.34.2
gt5-wangbikang-test-vm10-187   Ready    <none>   2d     v1.34.2
gt5-wangbikang-test-vm10-188   Ready    <none>   2d     v1.34.2
```

8. Install the cilium plugin

Note: there must be no whitespace after the trailing backslashes. If image pulls fail after installation, you can use kubectl edit to point the images at your own registry.

You can pull the cilium images in advance and push them to your own registry. The cilium images are:

```
cilium           quay.io/cilium/cilium:v1.18.4@sha256:49d87af187eeeb9e9e3ec2bc6bd372261a0b5cb2d845659463ba7cc10fe9e45f
cilium-envoy     quay.io/cilium/cilium-envoy:v1.34.10-1762597008-ff7ae7d623be00078865cff1b0672cc5d9bfc6d5@sha256:1deb6709afcb5523579bf1abbc3255adf9e354565a88c4a9162c8d9cb1d77ab5
cilium-operator  quay.io/cilium/operator-generic:v1.18.4@sha256:1b22b9ff28affdf574378a70dade4ef835b00b080c2ee2418530809dd62c3012
hubble-relay     quay.io/cilium/hubble-relay:v1.18.4@sha256:6d350cb1c84b847adb152173debef1f774126c69de21a5921a1e6a23b8779723
hubble-ui        quay.io/cilium/hubble-ui-backend:v0.13.3@sha256:db1454e45dc39ca41fbf7cad31eec95d99e5b9949c39daaad0fa81ef29d56953
hubble-ui        quay.io/cilium/hubble-ui:v0.13.3@sha256:661d5de7050182d495c6497ff0b007a7a1e379648e60830dd68c4d78ae21761d
```

Installation command:

```shell
helm install cilium cilium/cilium --version 1.18.4 \
  --namespace kube-system \
  --set routingMode=native \
  --set kubeProxyReplacement=true \
  --set autoDirectNodeRoutes=true \
  --set ipv4NativeRoutingCIDR=10.244.0.0/16 \
  --set loadBalancer.mode=hybrid \
  --set loadBalancer.acceleration=native \
  --set k8sServiceHost=apiserver.my-k8s.local \
  --set k8sServicePort=6443 \
  --set bpf.datapathMode=netkit \
  --set bpf.masquerade=true \
  --set bandwidthManager.enabled=true \
  --set bandwidthManager.bbr=true \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
  --set ipam.operator.clusterPoolIPv4MaskSize=24 \
  --set prometheus.enabled=true \
  --set operator.prometheus.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,port-distribution,http}" \
  --set bpf.distributedLRU.enabled=true \
  --set bpf.mapDynamicSizeRatio=0.08 \
  --set ipv4.enabled=true \
  --set enableIPv4BIGTCP=true
```
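A long --set list like this is easier to maintain as a values file. The sketch below writes a partial values.yaml covering a subset of the same settings (assuming Helm 3 and the cilium chart's value layout); you would then install with `helm install cilium cilium/cilium --version 1.18.4 -n kube-system -f values.yaml` and keep the remaining flags on the command line or add them to the file.

```shell
# Write a partial values.yaml equivalent to some of the --set flags above
# (a sketch; verify key names against the cilium chart version you use).
cat > values.yaml <<'EOF'
routingMode: native
kubeProxyReplacement: true
autoDirectNodeRoutes: true
ipv4NativeRoutingCIDR: 10.244.0.0/16
k8sServiceHost: apiserver.my-k8s.local
k8sServicePort: 6443
bpf:
  masquerade: true
ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
      - 10.244.0.0/16
    clusterPoolIPv4MaskSize: 24
hubble:
  relay:
    enabled: true
  ui:
    enabled: true
EOF
echo "wrote values.yaml"
```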

After a successful installation, run `cilium status`; the result is shown below. Note that the hubble-ui service only becomes healthy after coredns is installed.

```
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 4, Ready: 4/4, Available: 4/4
DaemonSet              cilium-envoy       Desired: 4, Ready: 4/4, Available: 4/4
Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 4
                       cilium-envoy       Running: 4
                       cilium-operator    Running: 2
                       clustermesh-apiserver
                       hubble-relay       Running: 1
                       hubble-ui          Running: 1
Cluster Pods:          6/7 managed by Cilium
Helm chart version:    1.18.4
Image versions         cilium             crpi-44hgz4440mgo9lnt.cn-guangzhou.personal.cr.aliyuncs.com/eeenet/cilium:v1.18.4: 4
                       cilium-envoy       crpi-44hgz4440mgo9lnt.cn-guangzhou.personal.cr.aliyuncs.com/eeenet/cilium-envoy:v1.34.10-1762597008-ff7ae7d623be00078865cff1b0672cc5d9bfc6d5: 4
                       cilium-operator    crpi-44hgz4440mgo9lnt.cn-guangzhou.personal.cr.aliyuncs.com/eeenet/operator-generic:v1.18.4: 2
                       hubble-relay       crpi-44hgz4440mgo9lnt.cn-guangzhou.personal.cr.aliyuncs.com/eeenet/hubble-relay:v1.18.4: 1
                       hubble-ui          crpi-44hgz4440mgo9lnt.cn-guangzhou.personal.cr.aliyuncs.com/eeenet/hubble-ui-backend:v0.13.3: 1
                       hubble-ui          crpi-44hgz4440mgo9lnt.cn-guangzhou.personal.cr.aliyuncs.com/eeenet/hubble-ui:v0.13.3: 1
```

9. Install coredns

```shell
kubectl apply -f coredns-v1.13.1.yaml
```

10. Install metric-server

```shell
kubectl apply -f metric-server-0.8.0_components.yaml
```
