Kubernetes official documentation (Chinese edition)
https://kubernetes.io/zh-cn/docs/home/
Deploying a Kubernetes Cluster
1. Install kubeadm, kubelet, and kubectl
# Run on the Master node and all Node nodes:
yum -y install kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet
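If the yum install above fails because no Kubernetes repository is configured, a repo definition along the following lines can be added first (a sketch using the Aliyun mirror; the URL and gpgcheck setting are assumptions, substitute the mirror and GPG settings you actually use):
# Run on the Master node and all Node nodes:
cat > /etc/yum.repos.d/kubernetes.repo << 'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum clean all && yum makecache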
2. Deploy the Master Node
2-1 Align the Docker and kubelet cgroup drivers
Note: if the drivers are not aligned, initializing the Master with kubeadm in step 2-2 may fail with the problem below:
# The Master node initialization may hit the following problem (it usually appears in the next step):
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
This is usually because the kubelet fails to start during kubeadm init, most often due to a cgroup-driver mismatch between Docker and the kubelet; the common fix is as follows:
Check the cgroup driver used by Docker and by the kubelet
# Run on the Master node
[root@k8s-master ~]# docker info | grep Cgroup
Cgroup Driver: cgroupfs
Cgroup Version: 1
[root@k8s-master ~]# cat /var/lib/kubelet/config.yaml | grep cgroup
cgroupDriver: systemd
[root@k8s-master ~]#
Change the Docker cgroup driver to match the kubelet's
# Run on the Master node
# Edit the Docker daemon configuration file
vim /etc/docker/daemon.json
# Add the following key/value pair:
"exec-opts":["native.cgroupdriver=systemd"]
Restart the Docker service and reset kubeadm
# Run on the Master node
systemctl daemon-reload
systemctl restart docker
kubeadm reset
2-2 Initialize the Master node
# Run on the K8S-Master node:
kubeadm init \
--apiserver-advertise-address=192.168.19.121 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
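The same flags can also be expressed as a kubeadm configuration file and passed with --config (a sketch for the version and addresses used in this guide; adjust them to your environment before use):
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.19.121
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
# Then: kubeadm init --config kubeadm-config.yaml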
# When you see the message below, the control plane has been initialized successfully
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.19.121:6443 --token vqcy5d.xvuqo6fhnpzzba5y \
--discovery-token-ca-cert-hash sha256:fd0231d2902f08a126fd2a05587a4af2592a2bd6f5b361b5ebbc8f748066e44a
2-3 Save the kubeconfig on the Master node
# Run on the K8S-Master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
2-4 Check the Master node status
# Run on the K8S-Master node:
[root@k8s-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 5m52s v1.23.6
[root@k8s-master ~]#
The NotReady status is expected at this point: no CNI network plugin has been installed yet (it is deployed in step 4).
2-5 View the Master node's bootstrap token
# Run on the K8S-Master node:
[root@k8s-master ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
vqcy5d.xvuqo6fhnpzzba5y 23h 2024-09-27T01:11:57Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
[root@k8s-master ~]#
# If the token has expired, a new one can be created
[root@k8s-master ~]# kubeadm token create
pdho80.11nwvcmf49lx48q5
[root@k8s-master ~]#
2-6 Generate the sha256 hash of the CA certificate on the Master node
# Run on the Master node:
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
fd0231d2902f08a126fd2a05587a4af2592a2bd6f5b361b5ebbc8f748066e44a
[root@k8s-master ~]#
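Steps 2-5 and 2-6 can also be combined: kubeadm can print a ready-to-use join command, with a freshly created token and the CA certificate hash already filled in:
# Run on the Master node:
kubeadm token create --print-join-command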
3. Deploy the Node Nodes
3-1 Set the Docker cgroup driver on the Node nodes
# Run on all Node nodes:
# Edit the Docker daemon configuration file (create it if it does not exist)
vim /etc/docker/daemon.json
# Add the following key/value pair (same as on the Master, see 2-1):
"exec-opts":["native.cgroupdriver=systemd"]
# Restart the Docker service
systemctl daemon-reload
systemctl restart docker
3-2 Check the kubelet status on the Node nodes
Because the Node nodes will join the cluster that the Master belongs to, do not run kubeadm init on them.
# Run on all Node nodes:
[root@k8s-node-1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since 四 2024-09-26 09:38:43 CST; 38ms ago
Docs: https://kubernetes.io/docs/
Main PID: 17310 (kubelet)
Tasks: 6
Memory: 11.0M
CGroup: /system.slice/kubelet.service
└─17310 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kube...
9月 26 09:38:43 k8s-node-1 systemd[1]: Started kubelet: The Kubern....
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node-1 ~]#
3-3 Join the Node nodes to the cluster
In the command, k8s-master is the Master node's hostname; if no hosts entry is configured, the Master's IP can be used instead.
Port 6443 is fixed (the kube-apiserver port).
The token is the one created on the Master node.
The hash is the sha256 value generated on the Master node, prefixed with sha256:
# Run on all Node nodes:
[root@k8s-node-1 ~]# kubeadm join k8s-master:6443 --token vqcy5d.xvuqo6fhnpzzba5y --discovery-token-ca-cert-hash sha256:fd0231d2902f08a126fd2a05587a4af2592a2bd6f5b361b5ebbc8f748066e44a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node-1 ~]#
3-4 Check the kubelet status on the Node nodes again
Having joined the cluster, the Node pulls its kubelet configuration from the Master, so the kubelet service now runs normally.
# Run on all Node nodes:
[root@k8s-node-1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since 四 2024-09-26 09:41:49 CST; 2min 28s ago
Docs: https://kubernetes.io/docs/
Main PID: 17507 (kubelet)
Tasks: 15
Memory: 49.4M
CGroup: /system.slice/kubelet.service
└─17507 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.co...
9月 26 09:43:55 k8s-node-1 kubelet[17507]: I0926 09:43:55.856027 17507 cni.go:240] "Unabl....d"
9月 26 09:43:57 k8s-node-1 kubelet[17507]: E0926 09:43:57.217219 17507 kubelet.go:2386] "...ed"
9月 26 09:44:00 k8s-node-1 kubelet[17507]: I0926 09:44:00.857478 17507 cni.go:240] "Unabl....d"
9月 26 09:44:02 k8s-node-1 kubelet[17507]: E0926 09:44:02.258648 17507 kubelet.go:2386] "...ed"
9月 26 09:44:05 k8s-node-1 kubelet[17507]: I0926 09:44:05.858752 17507 cni.go:240] "Unabl....d"
9月 26 09:44:07 k8s-node-1 kubelet[17507]: E0926 09:44:07.274336 17507 kubelet.go:2386] "...ed"
9月 26 09:44:10 k8s-node-1 kubelet[17507]: I0926 09:44:10.859362 17507 cni.go:240] "Unabl....d"
9月 26 09:44:12 k8s-node-1 kubelet[17507]: E0926 09:44:12.289357 17507 kubelet.go:2386] "...ed"
9月 26 09:44:15 k8s-node-1 kubelet[17507]: I0926 09:44:15.860374 17507 cni.go:240] "Unabl....d"
9月 26 09:44:17 k8s-node-1 kubelet[17507]: E0926 09:44:17.303745 17507 kubelet.go:2386] "...ed"
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node-1 ~]#
3-5 Check the cluster status
# Run on the Master node:
[root@k8s-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 35m v1.23.6
k8s-node-1 NotReady <none> 5m17s v1.23.6
k8s-node-2 NotReady <none> 5m17s v1.23.6
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
etcd-0 Healthy {"health":"true","reason":""}
scheduler Healthy ok
controller-manager Healthy ok
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods
No resources found in default namespace.
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-75fh5 0/1 Pending 0 36m
coredns-6d8c4cb4d-v67jg 0/1 Pending 0 36m
etcd-k8s-master 1/1 Running 0 36m
kube-apiserver-k8s-master 1/1 Running 0 36m
kube-controller-manager-k8s-master 1/1 Running 0 36m
kube-proxy-brr6v 1/1 Running 0 6m30s
kube-proxy-c69n4 1/1 Running 0 36m
kube-proxy-s8xnp 1/1 Running 0 6m30s
kube-scheduler-k8s-master 1/1 Running 0 36m
[root@k8s-master ~]#
4. Deploy the CNI Network Plugin (calico)
4-1 Download the calico manifest
# Run on the Master node:
[root@k8s-master opt]# curl https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 232k 100 232k 0 0 228k 0 0:00:01 0:00:01 --:--:-- 228k
[root@k8s-master opt]#
[root@k8s-master opt]#
[root@k8s-master opt]# ls
calico.yaml cni containerd docker docker-20.10.0.tgz
4-2 Edit the calico manifest
Find the CALICO_IPV4POOL_CIDR setting.
Change its value to the value passed to --pod-network-cidr during kubeadm init on the Master, 10.244.0.0/16 (see the sketch after this block),
or leave the CALICO_IPV4POOL_CIDR name and value commented out, as shown below:
# In calico.yaml on the Master node:
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
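If you choose to set the value instead, the edited entry would look like this (the indentation follows the env list of the calico-node container in calico.yaml):
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"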
4-3 Apply the calico manifest
# Run on the Master node:
[root@k8s-master opt]# kubectl apply -f ./calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@k8s-master opt]#
# Wait here for the calico images to be pulled and the containers to be created; re-run this command a few times (or add -w to watch) until everything is Running
[root@k8s-master opt]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-64cc74d646-q9dng 0/1 Pending 0 35s
calico-node-766fp 0/1 Init:0/3 0 35s
calico-node-t457s 0/1 Init:0/3 0 35s
calico-node-wdbvq 0/1 Init:0/3 0 35s
coredns-6d8c4cb4d-cxdqv 0/1 Pending 0 41m
coredns-6d8c4cb4d-lg4jk 0/1 Pending 0 41m
etcd-k8s-master 1/1 Running 0 42m
kube-apiserver-k8s-master 1/1 Running 0 42m
kube-controller-manager-k8s-master 1/1 Running 0 42m
kube-proxy-4mc87 1/1 Running 0 4m7s
kube-proxy-6f97w 1/1 Running 0 41m
kube-proxy-q2dv2 1/1 Running 0 4m32s
kube-scheduler-k8s-master 1/1 Running 0 42m
[root@k8s-master opt]#
[root@k8s-master opt]# kubectl describe po calico-node-766fp -n kube-system
Name: calico-node-766fp
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: k8s-master/192.168.19.121
Start Time: Fri, 27 Sep 2024 09:23:47 +0800
Labels: controller-revision-hash=5968f75c94
k8s-app=calico-node
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 192.168.19.121
IPs:
IP: 192.168.19.121
Controlled By: DaemonSet/calico-node
Init Containers:
upgrade-ipam:
Container ID:
Image: docker.io/calico/cni:v3.25.0
Image ID:
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/calico-ipam
-upgrade
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
Mounts:
/host/opt/cni/bin from cni-bin-dir (rw)
/var/lib/cni/networks from host-local-net-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5tqjg (ro)
install-cni:
Container ID:
Image: docker.io/calico/cni:v3.25.0
Image ID:
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/install
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
CNI_CONF_NAME: 10-calico.conflist
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CNI_MTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
SLEEP: false
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5tqjg (ro)
mount-bpffs:
Container ID:
Image: docker.io/calico/node:v3.25.0
Image ID:
Port: <none>
Host Port: <none>
Command:
calico-node
-init
-best-effort
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/nodeproc from nodeproc (ro)
/sys/fs from sys-fs (rw)
/var/run/calico from var-run-calico (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5tqjg (ro)
Containers:
calico-node:
Container ID:
Image: docker.io/calico/node:v3.25.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 250m
Liveness: exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6
Readiness: exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
DATASTORE_TYPE: kubernetes
WAIT_FOR_DATASTORE: true
NODENAME: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: k8s,bgp
IP: autodetect
CALICO_IPV4POOL_IPIP: Always
CALICO_IPV4POOL_VXLAN: Never
CALICO_IPV6POOL_VXLAN: Never
FELIX_IPINIPMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_VXLANMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_WIREGUARDMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
CALICO_DISABLE_FILE_LOGGING: true
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
FELIX_IPV6SUPPORT: false
FELIX_HEALTHENABLED: true
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/sys/fs/bpf from bpffs (rw)
/var/lib/calico from var-lib-calico (rw)
/var/log/calico/cni from cni-log-dir (ro)
/var/run/calico from var-run-calico (rw)
/var/run/nodeagent from policysync (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5tqjg (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
sys-fs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/
HostPathType: DirectoryOrCreate
bpffs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/bpf
HostPathType: Directory
nodeproc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
cni-log-dir:
Type: HostPath (bare host directory volume)
Path: /var/log/calico/cni
HostPathType:
host-local-net-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/cni/networks
HostPathType:
policysync:
Type: HostPath (bare host directory volume)
Path: /var/run/nodeagent
HostPathType: DirectoryOrCreate
kube-api-access-5tqjg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoSchedule op=Exists
:NoExecute op=Exists
CriticalAddonsOnly op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned kube-system/calico-node-766fp to k8s-master
Normal Pulling 30s kubelet Pulling image "docker.io/calico/cni:v3.25.0"
[root@k8s-master opt]#
[root@k8s-master opt]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-64cc74d646-2dzgs 1/1 Running 0 2m23s
calico-node-krl8c 1/1 Running 0 2m23s
calico-node-l25cd 1/1 Running 0 2m23s
calico-node-vzspk 1/1 Running 0 2m23s
coredns-6d8c4cb4d-fz9xx 1/1 Running 0 7m7s
coredns-6d8c4cb4d-wpprb 1/1 Running 0 7m7s
etcd-k8s-master 1/1 Running 0 7m19s
kube-apiserver-k8s-master 1/1 Running 0 7m19s
kube-controller-manager-k8s-master 1/1 Running 0 7m24s
kube-proxy-7sl4n 1/1 Running 0 3m46s
kube-proxy-sdzx5 1/1 Running 0 7m7s
kube-proxy-tbhp7 1/1 Running 0 3m46s
kube-scheduler-k8s-master 1/1 Running 0 7m19s
[root@k8s-master opt]#
[root@k8s-master opt]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 7m49s v1.23.6
k8s-node-1 Ready <none> 4m10s v1.23.6
k8s-node-2 Ready <none> 4m10s v1.23.6
[root@k8s-master opt]#
# Images pulled on the Master node at this point:
[root@k8s-master opt]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
calico/cni v3.25.0 d70a5947d57e 20 months ago 198MB
calico/node v3.25.0 08616d26b8e7 20 months ago 245MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.23.6 8fa62c12256d 2 years ago 135MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.6 4c0375452406 2 years ago 112MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.23.6 df7b72818ad2 2 years ago 125MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.23.6 595f327f224a 2 years ago 53.5MB
registry.aliyuncs.com/google_containers/etcd 3.5.1-0 25f8c7f3da61 2 years ago 293MB
registry.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7 2 years ago 46.8MB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12 3 years ago 683kB
[root@k8s-master opt]#
# Images pulled on the Node nodes at this point:
[root@k8s-node-1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 39286ab8a5e1 6 weeks ago 188MB
calico/cni v3.25.0 d70a5947d57e 20 months ago 198MB
calico/node v3.25.0 08616d26b8e7 20 months ago 245MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.6 4c0375452406 2 years ago 112MB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12 3 years ago 683kB
[root@k8s-node-1 ~]#
5. Use kubectl on Any Node (optional; skip if kubectl is only needed on the Master)
5-1 Try kubectl on a Node node
# Run on all Node nodes:
[root@k8s-node-1 ~]# kubectl get cs
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@k8s-node-1 ~]#
5-2 Copy admin.conf from the Master node to the Node nodes
First copy /etc/kubernetes/admin.conf from the Master node into the /etc/kubernetes directory on every Node node.
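One way to do this is with scp from the Master node (a sketch assuming root SSH access and the hostnames used in this guide):
# Run on the Master node:
scp /etc/kubernetes/admin.conf root@k8s-node-1:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@k8s-node-2:/etc/kubernetes/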
# Then run on all Node nodes:
[root@k8s-node-1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-node-1 kubernetes]# ls
admin.conf kubelet.conf manifests pki
[root@k8s-node-1 kubernetes]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-node-1 kubernetes]# source ~/.bash_profile
[root@k8s-node-1 kubernetes]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 42d v1.23.6
k8s-node-1 Ready <none> 5h53m v1.23.6
k8s-node-2 Ready <none> 5h25m v1.23.6
[root@k8s-node-1 kubernetes]#
6. Create a Pod to Verify the Cluster Deployment
6-1 Create an Nginx Pod
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 9s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-85b98978db-9p7r8 1/1 Running 0 53s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
nginx NodePort 10.111.61.155 <none> 80:30801/TCP 4s
[root@k8s-master ~]#
6-2 Access nginx
# Run on the Master node or any Node node:
[root@k8s-master opt]# curl http://127.0.0.1:30801
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-master opt]#
6-3 Delete the Nginx resources
[root@k8s-master ~]# kubectl delete deployment nginx
deployment.apps "nginx" deleted
[root@k8s-master ~]# kubectl delete service nginx
service "nginx" deleted
[root@k8s-master ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-85b98978db-9p7r8 1/1 Running 2 (2d18h ago) 3d2h
[root@k8s-master ~]#
7. Deploy an NFS Shared Filesystem
7-1 Install nfs-utils
# Run on all nodes (Master, Node, and NFS):
[root@k8s-node-1 ~]# yum -y install nfs-utils
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
软件包 1:nfs-utils-1.3.0-0.68.el7.2.x86_64 已安装并且是最新版本
无须任何处理
[root@k8s-master ~]#
# Run on the NFS node:
[root@k8s-nfs nfs]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@k8s-nfs nfs]#
[root@k8s-nfs nfs]# systemctl start nfs-server
[root@k8s-nfs nfs]# systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Active: active (exited) since 二 2024-10-08 10:44:38 CST; 3s ago
Process: 63113 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
Process: 63096 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 63094 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 63096 (code=exited, status=0/SUCCESS)
Tasks: 0
Memory: 0B
CGroup: /system.slice/nfs-server.service
10月 08 10:44:38 k8s-master systemd[1]: Starting NFS server and services...
10月 08 10:44:38 k8s-master systemd[1]: Started NFS server and services.
[root@k8s-nfs nfs]#
[root@k8s-nfs nfs]# cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
[root@k8s-nfs nfs]#
7-2 Configure the NFS export policy on the NFS node
# Run on the NFS node:
[root@k8s-nfs nfs]# pwd
/opt/nfs
[root@k8s-nfs nfs]# ls -lhra
总用量 0
drwxr-xr-x. 2 root root 6 9月 30 01:15 rw
drwxr-xr-x. 3 root root 17 10月 8 18:49 ..
drwxr-xr-x. 4 root root 26 10月 8 18:49 .
[root@k8s-nfs nfs]# vi /etc/exports
# Edit /etc/exports and add the following line (rw = read-write, ro = read-only; rw is chosen here):
/opt/nfs/rw 192.168.19.0/24(rw,sync,no_subtree_check,no_root_squash)
# Run on the NFS node:
[root@k8s-nfs nfs]# exportfs -f
[root@k8s-nfs nfs]# systemctl reload nfs-server
[root@k8s-nfs nfs]#
[root@k8s-nfs nfs]# systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since 二 2024-10-08 18:55:03 CST; 6min ago
Process: 15147 ExecReload=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 15113 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
Process: 15060 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 15059 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 15060 (code=exited, status=0/SUCCESS)
10月 08 18:55:02 k8s-nfs systemd[1]: Starting NFS server and services...
10月 08 18:55:03 k8s-nfs systemd[1]: Started NFS server and services.
10月 08 19:01:49 k8s-nfs systemd[1]: Reloading NFS server and services.
10月 08 19:01:49 k8s-nfs systemd[1]: Reloaded NFS server and services.
[root@k8s-nfs nfs]#
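Before mounting, the export can be checked from any client with showmount (part of the nfs-utils package installed in 7-1); 192.168.19.124 is the NFS server used in this guide:
# Run on the Master node or any Node node:
showmount -e 192.168.19.124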
7-3 Mount the NFS share (only needed for verification; skip otherwise)
# Run on the Master node and all Node nodes:
[root@k8s-master mnt]# pwd
/mnt
[root@k8s-master mnt]# mkdir nfs
[root@k8s-master mnt]# ls -lhra
总用量 0
drwxr-xr-x. 4 root root 26 10月 8 2024 nfs
dr-xr-xr-x. 17 root root 224 9月 26 22:16 ..
drwxr-xr-x. 3 root root 17 10月 8 11:08 .
[root@k8s-master mnt]#
[root@k8s-master mnt]# mount -t nfs 192.168.19.124:/opt/nfs /mnt/nfs
[root@k8s-master mnt]#
[root@k8s-master mnt]# cd nfs/
[root@k8s-master nfs]# ls -lhra
总用量 0
drwxr-xr-x. 2 root root 21 10月 8 2024 rw
drwxr-xr-x. 2 root root 6 9月 30 01:15 ro
drwxr-xr-x. 3 root root 17 10月 8 11:08 ..
drwxr-xr-x. 4 root root 26 10月 8 2024 .
[root@k8s-master nfs]#
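The mount in 7-3 does not survive a reboot. To make it persistent, an /etc/fstab entry along these lines can be added on each client (a sketch; _netdev delays the mount until the network is up):
192.168.19.124:/opt/nfs  /mnt/nfs  nfs  defaults,_netdev  0 0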
7-4 Verify the NFS share (skip if not verifying)
# Run on the Master node or any Node node:
[root@k8s-master nfs]# cd ro
[root@k8s-master ro]# touch 456.txt
touch: 无法创建"456.txt": 只读文件系统
[root@k8s-master ro]# cd ../rw
[root@k8s-master rw]# touch 123.txt
[root@k8s-master rw]# echo 'hello' > 123.txt
...
# Switching to another Node node (or to the Master), the edited file is visible there as well
[root@k8s-node-1 rw]# cat 123.txt
hello
[root@k8s-node-1 rw]#
8. Configure a Dynamic Storage Provisioner Backed by NFS
8-1 Configure RBAC: define the account that calls the Kubernetes API and grant it the required permissions
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master nfs]# pwd
/opt/nfs
[root@k8s-master nfs]# vi nfs-rbac.yaml
# File contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "list", "watch", "create", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
8-2 Configure the Provisioner (the dynamic provisioner Deployment)
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master nfs]# pwd
/opt/nfs
[root@k8s-master nfs]# vi nfs-deployment.yaml
# File contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner-rw
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root-rw
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: 192.168.19.124
            - name: NFS_PATH
              value: /opt/nfs/rw # read-write path on the NFS server
      volumes:
        - name: nfs-client-root-rw
          nfs:
            server: 192.168.19.124
            path: /opt/nfs/rw # path on the NFS server
8-3 Configure the StorageClass
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master nfs]# pwd
/opt/nfs
[root@k8s-master nfs]# vi nfs-storage-class.yaml
# File contents:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-rw
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # make this the default StorageClass
provisioner: nfs-provisioner
parameters:
  archiveOnDelete: "false"
mountOptions:
  - rw # read-write
reclaimPolicy: Delete # delete the PV automatically when the PVC is deleted
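Because this StorageClass is marked as the default, PVCs that omit storageClassName (like nginx-pvc in 8-6) will bind through it automatically. To target it explicitly, a PVC's spec can name it; a sketch of the relevant PVC fields (not part of this StorageClass file):
spec:
  storageClassName: nfs-client-rw
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi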
8-4 Create the StorageClass
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master nfs]# pwd
/opt/nfs
[root@k8s-master nfs]# kubectl apply -f nfs-rbac.yaml
[root@k8s-master nfs]# kubectl apply -f nfs-deployment.yaml
[root@k8s-master nfs]# kubectl apply -f nfs-storage-class.yaml
[root@k8s-master nfs]#
[root@k8s-master nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client-rw (default) nfs-provisioner Delete Immediate false 5s
8-5 Troubleshoot possible issues (skip if none occur)
Issue 1: selfLink was empty
# Check the dynamic provisioner's logs. If they contain errors like "selfLink was empty, can't make reference" (example below),
# it is because Kubernetes disables the relevant API by default from version 1.20.* onward
# Related issue: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25
[root@k8s-master nfs]# kubectl get po
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-78bbb57865-mfrbc 1/1 Running 0 16h
[root@k8s-master nfs]#
[root@k8s-master nfs]# kubectl logs nfs-client-provisioner-78bbb57865-mfrbc
I1012 15:11:48.679214 1 leaderelection.go:185] attempting to acquire leader lease default/nfs-provisioner...
E1012 15:11:48.693154 1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nfs-provisioner", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4d585a72-a76f-4084-8f04-2e63837404b4", ResourceVersion:"116101", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63864342708, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-78bbb57865-tg5nm_4d711e2f-88ac-11ef-95cc-7e5c54d15e8a\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2024-10-12T15:11:48Z\",\"renewTime\":\"2024-10-12T15:11:48Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-78bbb57865-tg5nm_4d711e2f-88ac-11ef-95cc-7e5c54d15e8a became leader'
[root@k8s-master nfs]#
...
# Fix: edit the kube-apiserver manifest and add the line --feature-gates=RemoveSelfLink=false under spec.containers.command
# After the change, run kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml to re-apply the configuration
[root@k8s-master nfs]# find / -name kube-apiserver.yaml
/etc/kubernetes/manifests/kube-apiserver.yaml
[root@k8s-master nfs]#
Example:
spec:
  containers:
    - command:
        - kube-apiserver
        - --feature-gates=RemoveSelfLink=false
        - --advertise-address=192.168.19.121
Issue 2: Failed to update lock | Failed to create lock
# Check the dynamic provisioner's logs. If they contain errors like "Failed to update lock" or "Failed to create lock" (example below),
# the nfs-provisioner service account is missing permissions
[root@k8s-master nfs]# kubectl get po
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-78bbb57865-mfrbc 1/1 Running 0 16h
[root@k8s-master nfs]#
[root@k8s-master nfs]# kubectl logs nfs-client-provisioner-78bbb57865-mfrbc
I1012 15:11:48.693319 1 controller.go:631] Starting provisioner controller nfs-provisioner_nfs-client-provisioner-78bbb57865-tg5nm_4d711e2f-88ac-11ef-95cc-7e5c54d15e8a!
I1012 15:11:48.794736 1 controller.go:680] Started provisioner controller nfs-provisioner_nfs-client-provisioner-78bbb57865-tg5nm_4d711e2f-88ac-11ef-95cc-7e5c54d15e8a!
E1012 15:11:50.705608 1 leaderelection.go:268] Failed to update lock: endpoints "nfs-provisioner" is forbidden: User "system:serviceaccount:default:nfs-client-provisioner" cannot update resource "endpoints" in API group "" in the namespace "default"
[root@k8s-master nfs]#
...
# Fix: edit nfs-rbac.yaml and add the permission named in the log for the affected resource and API group (in this example the update verb was missing for endpoints), then re-apply it with kubectl apply -f nfs-rbac.yaml
Example:
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "list", "watch", "create", "update"]
8-6 Verify the dynamic provisioner with Nginx
Prepare the manifests Nginx needs and run Nginx
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master nginx]# ls
nginx-deployment.yaml nginx-pvc.yaml nginx-service.yaml
[root@k8s-master nginx]# pwd
/opt/nginx
[root@k8s-master nginx]# cat nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
[root@k8s-master nginx]#
[root@k8s-master nginx]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-html
              mountPath: /usr/share/nginx/html # mounted at nginx's default web root
      volumes:
        - name: nginx-html
          persistentVolumeClaim:
            claimName: nginx-pvc # mount the PVC
[root@k8s-master nginx]#
[root@k8s-master nginx]# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort # expose via NodePort
[root@k8s-master nginx]#
[root@k8s-master nginx]# kubectl apply -f nginx-pvc.yaml
persistentvolumeclaim/nginx-pvc created
[root@k8s-master nginx]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc Bound pvc-2e5e5218-4fee-4a9c-8f8e-526bda9d1563 1Gi RWX nfs-client-rw 5s
[root@k8s-master nginx]#
[root@k8s-master nginx]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@k8s-master nginx]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 16h
nginx-deployment 1/1 1 1 13s
[root@k8s-master nginx]#
[root@k8s-master nginx]# kubectl apply -f nginx-service.yaml
service/nginx-service created
[root@k8s-master nginx]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
nginx-service NodePort 10.106.128.226 <none> 80:32106/TCP 7s
[root@k8s-master nginx]#
# Accessing the nginx home page now returns 403, because the default web root is mounted from NFS and no index.html has been created yet
[root@k8s-master nginx]# curl http://127.0.0.1:32106
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.2</center>
</body>
</html>
[root@k8s-master nginx]#
[root@k8s-master nginx]# kubectl get po
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-78bbb57865-mfrbc 1/1 Running 0 16h
nginx-deployment-588f845875-wj6ps 1/1 Running 0 57s
In the shared directory on the NFS server, find the storage directory provisioned for the target Pod and create index.html
# Run on the NFS server
[root@k8s-nfs ~]# cd /opt/nfs/
[root@k8s-nfs nfs]# ls
rw
[root@k8s-nfs nfs]# cd rw/
[root@k8s-nfs rw]# ls
123.txt aaa.txt default-nginx-pvc-pvc-2e5e5218-4fee-4a9c-8f8e-526bda9d1563
[root@k8s-nfs rw]# cd default-nginx-pvc-pvc-2e5e5218-4fee-4a9c-8f8e-526bda9d1563/
[root@k8s-nfs default-nginx-pvc-pvc-2e5e5218-4fee-4a9c-8f8e-526bda9d1563]# vi index.html
# Write any content into index.html
[root@k8s-nfs default-nginx-pvc-pvc-2e5e5218-4fee-4a9c-8f8e-526bda9d1563]# ls -lhra
总用量 4.0K
-rw-r--r--. 1 root root 6 10月 13 16:27 index.html
drwxrwxrwx. 3 root root 102 10月 13 16:24 ..
drwxrwxrwx. 2 nfsnobody nfsnobody 24 10月 13 16:27 .
[root@k8s-nfs default-nginx-pvc-pvc-2e5e5218-4fee-4a9c-8f8e-526bda9d1563]#
Access Nginx again from a K8S node
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
# The response now contains the content of the index.html file on the NFS share
[root@k8s-master nginx]# curl http://127.0.0.1:32106
hello
[root@k8s-master nginx]#
After verification, delete the nginx resources
# Run in the directory that contains nginx-pvc.yaml and the other manifests
[root@k8s-master nfs]# ls
nfs-deployment.yaml nfs-rbac.yaml nfs-storage-class.yaml nginx-deployment.yaml nginx-pvc.yaml nginx-service.yaml
[root@k8s-master nfs]# kubectl delete -f nginx-service.yaml
service "nginx-service" deleted
[root@k8s-master nfs]# kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted
[root@k8s-master nfs]# kubectl delete -f nginx-pvc.yaml
persistentvolumeclaim "nginx-pvc" deleted
[root@k8s-master nfs]#
9. Install KubeSphere (version 3.4)
9.1 Download the KubeSphere manifests
# Run on the Master node (or, if step 5 was configured, on the Master node or any Node node):
[root@k8s-master kubesphere]# pwd
/opt/kubesphere
[root@k8s-master kubesphere]# wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml
[root@k8s-master kubesphere]# wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml
[root@k8s-master kubesphere]# wget https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.1/scripts/kubesphere-delete.sh
9.2 Install KubeSphere
# Run on the node where the KubeSphere manifests were downloaded
[root@k8s-master kubesphere]# ls
cluster-configuration.yaml kubesphere-delete.sh kubesphere-installer.yaml
[root@k8s-master kubesphere]# kubectl apply -f kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
[root@k8s-master kubesphere]#
[root@k8s-master kubesphere]# kubectl apply -f cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created
[root@k8s-master kubesphere]#
[root@k8s-master kubesphere]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
2024-10-14T21:52:42+08:00 INFO : shell-operator latest
2024-10-14T21:52:42+08:00 INFO : Use temporary dir: /tmp/shell-operator
2024-10-14T21:52:42+08:00 INFO : Initialize hooks manager ...
2024-10-14T21:52:42+08:00 INFO : Search and load hooks ...
2024-10-14T21:52:42+08:00 INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2024-10-14T21:52:42+08:00 INFO : HTTP SERVER Listening on 0.0.0.0:9115
2024-10-14T21:52:43+08:00 INFO : Load hook config from '/hooks/kubesphere/schedule.sh'
2024-10-14T21:52:43+08:00 INFO : Initializing schedule manager ...
2024-10-14T21:52:43+08:00 INFO : KUBE Init Kubernetes client
2024-10-14T21:52:43+08:00 INFO : KUBE-INIT Kubernetes client is configured successfully
2024-10-14T21:52:43+08:00 INFO : MAIN: run main loop
2024-10-14T21:52:43+08:00 INFO : MAIN: add onStartup tasks
2024-10-14T21:52:43+08:00 INFO : QUEUE add all HookRun@OnStartup
2024-10-14T21:52:43+08:00 INFO : Running schedule manager ...
2024-10-14T21:52:43+08:00 INFO : MSTOR Create new metric shell_operator_live_ticks
2024-10-14T21:52:43+08:00 INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2024-10-14T21:52:43+08:00 INFO : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
2024-10-14T21:52:43+08:00 INFO : EVENT Kube event '484f624f-1d3f-47a8-9d5c-8c932c6e2f59'
2024-10-14T21:52:43+08:00 INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2024-10-14T21:52:46+08:00 INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2024-10-14T21:52:46+08:00 INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
...
PLAY RECAP *********************************************************************
localhost : ok=30 changed=22 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful (1/4)
task openpitrix status is successful (2/4)
task multicluster status is successful (3/4)
task monitoring status is successful (4/4)
**************************************************
Collecting installation results ...
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.19.121:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 2024-10-14 22:15:57
#####################################################
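After the installer reports success, the KubeSphere workloads can be double-checked before logging in to the console (the console NodePort 30880 comes from the installer output above; pods in other KubeSphere namespaces may still take a while to become Ready):
# Run on the Master node:
kubectl get pods -n kubesphere-system
kubectl get svc -n kubesphere-system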
9.3 Completely uninstall KubeSphere (optional; run only when KubeSphere is no longer needed)
# Run on the node where the KubeSphere manifests were downloaded
[root@k8s-master kubesphere]# pwd
/opt/kubesphere
[root@k8s-master kubesphere]# ls
cluster-configuration.yaml kubesphere-delete.sh kubesphere-installer.yaml
[root@k8s-master kubesphere]# chmod +x kubesphere-delete.sh
[root@k8s-master kubesphere]# sh kubesphere-delete.sh
[root@k8s-master kubesphere]#