Basic Usage of the Calico Network Plugin
Prerequisites:
1. Remove Flannel (see the command sketch after the image list below).
2. Pull the required images on every node in the cluster (take the exact versions from the calico.yaml file):
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull calico/kube-controllers:v3.25.0
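A minimal sketch of both prerequisite steps, assuming Flannel was originally installed from the stock kube-flannel.yml manifest and Docker is the container runtime (both are assumptions; adjust to your environment):
# 1) Remove Flannel, plus the interfaces it typically leaves behind (run the ip commands on every node):
kubectl delete -f kube-flannel.yml
ip link delete cni0
ip link delete flannel.1
# 2) Pre-pull the Calico images (run on every node):
for img in calico/cni:v3.25.0 calico/node:v3.25.0 calico/kube-controllers:v3.25.0; do
  docker pull "$img"
done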
I. Installing Calico
1. Download the calico.yaml file
wget https://projectcalico.docs.tigera.io/manifests/calico.yaml
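Before editing, it is worth confirming what the manifest expects: the image tags should match the versions pre-pulled above, and CALICO_IPV4POOL_CIDR may need to be uncommented and set to your cluster's pod CIDR:
grep "image:" calico.yaml | sort -u
grep -n -A1 "CALICO_IPV4POOL_CIDR" calico.yaml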
2. Edit the calico.yaml file
Change the DaemonSet's tolerations to:
tolerations:
  # Make sure calico-node gets scheduled on all nodes.
  - effect: NoSchedule
    operator: Exists
  # Mark the pod as a critical add-on for rescheduling.
  - key: CriticalAddonsOnly
    operator: Exists
  - effect: NoExecute
    operator: Exists
Change the Deployment's tolerations to:
tolerations:
  # Mark the pod as a critical add-on for rescheduling.
  - key: CriticalAddonsOnly
    effect: NoSchedule
    operator: Exists
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
    operator: Exists
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
    operator: Exists
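A quick sanity check after editing: a client-side dry run catches YAML indentation mistakes without touching the cluster:
kubectl apply -f calico.yaml --dry-run=client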
3. Install Calico
[root@master 31]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
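Optionally wait for the rollout to finish instead of polling pod status by hand:
kubectl rollout status daemonset/calico-node -n kube-system
kubectl rollout status deployment/calico-kube-controllers -n kube-system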
4. Check Calico's running status
[root@master 17]# kubectl get pod -n kube-system    # it takes a while before the pods reach a normal state
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-64cc74d646-7hxql 1/1 Running 0 6m45s
calico-node-qmrbm 1/1 Running 0 6m45s
calico-node-tzpx2 1/1 Running 0 6m45s
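If you have also installed the calicoctl CLI (a separate download, not part of the manifest above), you can additionally check BGP peering between the nodes; run it as root on a cluster node:
calicoctl node status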
5. Remove the taint on the worker node (optional; the point is only that pods can be scheduled onto both the master and worker nodes)
[root@worker ~]# kubectl taint node worker node.kubernetes.io/unreachable:NoExecute-
node/worker untainted
6. Check the taints on each node in the cluster
[root@master 31]# kubectl describe node|grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: <none>
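The master still carries node-role.kubernetes.io/master:NoSchedule. If, on a test cluster, you also want ordinary pods scheduled onto the master (as happens in the next section), remove that taint too:
kubectl taint node master node-role.kubernetes.io/master:NoSchedule-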
II. Creating a Deployment to Manage Pods
1. Create 3 Nginx pods
[root@master 17]# kubectl create deploy ngx-dep --image=nginx:alpine --replicas=3
deployment.apps/ngx-dep created
2. Check pod status
[root@master 31]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ngx-dep-bfbb5f64b-8khhz 1/1 Running 0 15s 10.10.171.77 worker <none> <none>
ngx-dep-bfbb5f64b-cxwlb 1/1 Running 0 15m 10.10.219.68 master <none> <none>
ngx-dep-bfbb5f64b-pwhl4 1/1 Running 0 14m 10.10.219.69 master <none> <none>
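The pod IPs (10.10.219.x on the master, 10.10.171.x on the worker) are allocated from Calico's IP pool, which is carved into per-node address blocks. You can inspect the pool through the CRDs installed earlier; the default pool is usually named default-ipv4-ippool, but verify the name in your cluster:
kubectl get ippools.crd.projectcalico.org
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml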
3. Inspect the network interfaces inside a pod
[root@master 31]# kubectl exec ngx-dep-bfbb5f64b-cxwlb -- ip addr
4: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP
    link/ether de:40:75:0a:75:1c brd ff:ff:ff:ff:ff:ff
    inet 10.10.219.68/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::dc40:75ff:fe0a:751c/64 scope link
       valid_lft forever preferred_lft forever
[root@master 31]# ip addr
19: cali01f5bae7197@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
[root@master 31]# brctl show    # note: there is no cni0 bridge
bridge name bridge id STP enabled interfaces
br-b64c1192d423 8000.0242b3252498 no
docker0 8000.0242da697d39 no
virbr0 8000.525400e7170e yes virbr0-nic
So there is still a virtual NIC (one end of a veth pair), but on the host it is now named cali01f5bae7197@if4, and it is not attached to any "cni0" bridge.
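The @ifN suffixes pair up the two ends of the veth link: inside the pod, eth0@if19 says its peer is host interface index 19; on the host, 19: cali01f5bae7197@if4 points back at the pod's interface index 4. To locate a pod's host-side peer by that index:
ip -o link show | grep '^19:'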
Suppose Pod A (10.10.219.68) wants to reach Pod B (10.10.219.69). A lookup in the routing table says the traffic should leave through the cali1dd15c2b378 device, and the other end of that veth pair sits inside Pod B, so the packet lands directly on Pod B's NIC, skipping the bridge hop entirely.
[root@master 31]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 101 0 0 ens38
10.10.171.64 192.168.190.130 255.255.255.192 UG 0 0 0 tunl0
10.10.219.68 0.0.0.0 255.255.255.255 UH 0 0 0 cali01f5bae7197
10.10.219.69 0.0.0.0 255.255.255.255 UH 0 0 0 cali1dd15c2b378
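Note also the 10.10.171.64 route (netmask 255.255.255.192, i.e. a /26 block) pointing at tunl0: traffic to pods on the worker node is encapsulated in Calico's IPIP tunnel toward 192.168.190.130. A quick connectivity check, reusing the pod names and IPs from above (ping is available in nginx:alpine via BusyBox):
# Same-node path: master pod -> master pod, via the cali* veth route.
kubectl exec ngx-dep-bfbb5f64b-cxwlb -- ping -c 3 10.10.219.69
# Cross-node path: master pod -> worker pod, via the tunl0 IPIP route.
kubectl exec ngx-dep-bfbb5f64b-cxwlb -- ping -c 3 10.10.171.77
# Ask the kernel which route it would pick for a given destination:
ip route get 10.10.171.77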