Kubernetes: Deploying a Cluster with kubespray
kubespray is an open-source GitHub project for deploying production-grade Kubernetes clusters; the deployment is automated with ansible-playbook. GitHub: https://github.com/kubernetes-sigs/kubespray. See the repository for details such as supported OSes and plugins.
| Role | IP (SSH / cluster) | OS / Kernel | Notes |
|---|---|---|---|
| Deployment machine | 10.18.1.115 | CentOS 7.4 / kernel 3.10.0 | Unrestricted internet access |
| master/node | 10.18.217.135 / 172.22.4.4 | Ubuntu 22.04 LTS / kernel 5.15.0 | Unrestricted internet access |
| master/node | 10.18.217.124 / 172.22.3.50 | Ubuntu 22.04 LTS / kernel 5.15.0 | Unrestricted internet access |
| master/node | 10.18.217.139 / 172.22.2.55 | Ubuntu 22.04 LTS / kernel 5.15.0 | Unrestricted internet access |
Note: all of the hosts above have unrestricted internet access and can pull Docker images from foreign registries. If that is not possible in your environment, consider an offline deployment instead.
I. Prepare the Environment on the Deployment Machine
1. Passwordless SSH login as root
(omitted)
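Although omitted here, a minimal sketch of the usual setup on the deployment machine (assuming root SSH login is permitted on the three nodes; IPs as in the table above):
#ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
#for host in 10.18.217.135 10.18.217.124 10.18.217.139; do ssh-copy-id root@$host; done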
#mkdir -p /root/inventory/sample
#docker run -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.27.0 bash
The docker run above must be executed from /root so that "$(pwd)"/inventory/sample resolves to the directory just created. The next command runs inside the container; it copies the image's bundled sample inventory into the mounted directory:
#cp -rf inventory/sample /inventory/my-cluster
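Since /inventory is a bind mount, the copied inventory should also be visible on the host; a quick check from another shell on the deployment machine (path per the mount above):
#ls /root/inventory/sample/my-cluster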
2. Configure the host inventory
#vim /inventory/my-cluster/inventory.ini
#Note: kubespray separates the SSH address (ansible_host) from the cluster-communication address (ip) in the inventory, which is a nice design.
# This inventory describes an HA topology with stacked etcd (== same nodes as control plane)
# and 3 worker nodes
# See https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html
# for tips on building your inventory
# Configure the 'ip' variable to bind kubernetes services on a different ip than the default iface
# etcd_member_name should be set for each etcd cluster member. Nodes that are not etcd members
# do not need to set the value, or can set it to an empty string.
[kube_control_plane]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
node1 ansible_host=10.18.217.135 ip=172.22.4.4 etcd_member_name=etcd1
node2 ansible_host=10.18.217.124 ip=172.22.3.50 etcd_member_name=etcd2
node3 ansible_host=10.18.217.139 ip=172.22.2.55 etcd_member_name=etcd3
[etcd:children]
kube_control_plane
[kube_node]
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6
node1 ansible_host=10.18.217.135 ip=172.22.4.4
node2 ansible_host=10.18.217.124 ip=172.22.3.50
node3 ansible_host=10.18.217.139 ip=172.22.2.55
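Before running the full playbook, it is worth checking that Ansible can actually reach all three nodes over SSH. A minimal connectivity test from inside the container (Ansible's ping module only verifies SSH access and a usable Python interpreter; it does not send ICMP):
#ansible -i /inventory/my-cluster/inventory.ini all -m ping --private-key /root/.ssh/id_rsa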
3. Configure the cluster settings
#vim /inventory/my-cluster/group_vars/k8s_cluster/k8s-cluster.yml
The cilium CNI plugin is used here; the pod and service subnets below can also be changed if needed:
# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
#kube_network_plugin: calico
kube_network_plugin: cilium
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
kube_network_plugin_multus: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18
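After editing, a quick grep confirms the effective values; note that the service and pod CIDRs must not overlap with each other or with any existing network in your infrastructure:
#grep -E '^(kube_network_plugin|kube_service_addresses|kube_pods_subnet):' /inventory/my-cluster/group_vars/k8s_cluster/k8s-cluster.yml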
4. Deploy
#ansible-playbook -i /inventory/my-cluster/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
Deployment complete. The online install went very smoothly:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane 3m25s v1.31.4
node2 Ready control-plane 3m9s v1.31.4
node3 Ready control-plane 3m5s v1.31.4
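The kubectl commands here are run on one of the control-plane nodes. To manage the cluster from the deployment machine instead, copy the admin kubeconfig over (a sketch assuming kubespray's default kubeadm path /etc/kubernetes/admin.conf; the server address inside the file may need to point at a control-plane IP reachable from your machine):
#mkdir -p ~/.kube
#scp root@10.18.217.135:/etc/kubernetes/admin.conf ~/.kube/config
#kubectl get nodes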
5. Launch a test pod
# kubectl apply -f https://k8s.io/examples/application/deployment.yaml
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-d556bf558-24w64 1/1 Running 0 33s 10.233.64.149 node1 <none> <none>
nginx-deployment-d556bf558-hc6fc 1/1 Running 0 33s 10.233.66.106 node3 <none> <none>
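To also exercise Service networking on top of cilium, the deployment can be exposed and curled from one of the nodes (a hypothetical follow-up check, not part of the original run; ClusterIPs are reachable from cluster nodes):
#kubectl expose deployment nginx-deployment --port=80
#curl -sI http://$(kubectl get svc nginx-deployment -o jsonpath='{.spec.clusterIP}') | head -n1
An HTTP/1.1 200 OK response confirms that node-to-service traffic is flowing through the CNI.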