Installing KubeSphere 3.1.1 on a Kubernetes Cluster

https://www.cnblogs.com/lfl17718347843/p/14131111.html

https://www.cnblogs.com/lfl17718347843/p/14131062.html (primary reference)

https://v3-1.docs.kubesphere.io/zh/docs/quick-start/minimal-kubesphere-on-k8s/

https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml

For Kubernetes installation itself, see the other documents in this folder.

Deploying the KubeSphere dashboard add-on

Before using KubeSphere, you must designate a default StorageClass.

This example uses NFS as the cluster's default StorageClass backend. For NFS installation and configuration, see "Installing and Configuring NFS on CentOS 7".

On the NFS server, create the shared export directory and set its ownership:

mkdir /data/nfs-data/kubernetes
chown nfsnobody:nfsnobody /data/nfs-data/kubernetes
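The directory also has to be exported over NFS. A minimal export entry might look like the following (a sketch only; the subnet is hypothetical, and the exact options should follow the NFS guide referenced above):

```shell
# On the NFS server: export the directory to the cluster subnet (adjust 172.16.2.0/24 to yours)
echo '/data/nfs-data/kubernetes 172.16.2.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra   # reload the export table so the new entry takes effect
```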

Download the manifests for the NFS client provisioner (note: the kubernetes-incubator/external-storage repository has since been archived and this provisioner now lives at kubernetes-sigs/nfs-subdir-external-provisioner; the original URLs are kept below as used at the time):

wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/class.yaml
wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/deployment.yaml
wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/test-claim.yaml

Modify the configuration:

[root@k8s-master ~]# vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # must match the provisioner name in class.yaml
            - name: NFS_SERVER
              value: k8s-master # IP address or hostname of the NFS server
            - name: NFS_PATH
              value: /data/nfs-data/kubernetes # shared export directory on the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: k8s-master # IP address or hostname of the NFS server
            path: /data/nfs-data/kubernetes # shared export directory on the NFS server
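For context, the class.yaml downloaded above defines the StorageClass whose provisioner field must match PROVISIONER_NAME in the Deployment. It looks roughly like this (paraphrased from the external-storage repository):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # must match the PROVISIONER_NAME env var in deployment.yaml
parameters:
  archiveOnDelete: "false" # whether to archive a volume's data when its PVC is deleted
```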

Apply the four YAML files above:

[root@k8s-master ~]# kubectl apply -f .
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
persistentvolumeclaim/test-claim created

Check that the provisioner pod is running normally:

[root@k8s-master ~]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7564dfd4b-sxqnq   1/1     Running   0          3m28s

List the StorageClasses in your cluster:

[root@k8s-master ~]# kubectl get storageclass
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  19m

Mark a StorageClass as the default (substitute managed-nfs-storage with whatever name your deployed StorageClass has):

[root@k8s-master ~]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched

Verify that the StorageClass you chose is now the default:

[root@k8s-master ~]# kubectl get storageclass
NAME                            PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  20m
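Since test-claim.yaml was applied earlier, you can also confirm that dynamic provisioning actually works by checking that the test PVC was bound (a quick sanity check; the PVC name comes from the upstream test-claim.yaml):

```shell
kubectl get pvc test-claim
# STATUS should be Bound, with the dynamically created PV shown in the VOLUME column
```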

Troubleshooting:

During installation, kubectl get pod showed the provisioner pod stuck in ContainerCreating. Inspect it with:

[root@k8s-master ~]# kubectl describe pod nfs-client-provisioner-7564dfd4b-z9v26
Name:         nfs-client-provisioner-7564dfd4b-z9v26
Namespace:    default
Priority:     0
Node:         k8s-node2/172.16.2.137
Start Time:   Thu, 17 Jun 2021 14:34:27 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=7564dfd4b
Annotations:  <none>
Status:       Running
IP:           10.244.2.6
IPs:
  IP:           10.244.2.6
Controlled By:  ReplicaSet/nfs-client-provisioner-7564dfd4b
Containers:
  nfs-client-provisioner:
    Container ID:   docker://c934f32866245eb25278b8cef2459f43a481570ac96dfc959fab0efd60d76005
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 17 Jun 2021 14:43:03 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        k8s-master
      NFS_PATH:          /data/nfs-data/kubernetes
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bfb8g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    k8s-master
    Path:      /data/nfs-data/kubernetes
    ReadOnly:  false
  kube-api-access-bfb8g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                     From               Message
  ----     ------       ----                    ----               -------
  Normal   Scheduled    9m10s                   default-scheduler  Successfully assigned default/nfs-client-provisioner-7564dfd4b-z9v26 to k8s-node2
  Warning  FailedMount  4m54s                   kubelet            Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[kube-api-access-bfb8g nfs-client-root]: timed out waiting for the condition
  Warning  FailedMount  2m57s (x11 over 9m10s)  kubelet            MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs k8s-master:/data/nfs-data/kubernetes /var/lib/kubelet/pods/8af9d4b5-8e25-4757-bf03-543f6483936e/volumes/kubernetes.io~nfs/nfs-client-root
Output: mount.nfs: mounting k8s-master:/data/nfs-data/kubernetes failed, reason given by server: No such file or directory
  Warning  FailedMount  2m40s (x2 over 7m8s)  kubelet  Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[nfs-client-root kube-api-access-bfb8g]: timed out waiting for the condition
  Normal   Pulling      54s                   kubelet  Pulling image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal   Pulled       35s                   kubelet  Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest" in 18.864370694s
  Normal   Created      35s                   kubelet  Created container nfs-client-provisioner
  Normal   Started      35s                   kubelet  Started container nfs-client-provisioner

The events say the directory to be mounted does not exist on the server, so the shared directory must be created first and its permissions set, as shown at the beginning of this document.
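Before restarting the pod, you can confirm from a worker node that the export is now visible (this assumes the nfs-utils client tools are installed on the node):

```shell
showmount -e k8s-master
# should list /data/nfs-data/kubernetes; if it does not, re-run `exportfs -ra` on the NFS server
```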

After resolving the problem, restart the pod with the following command:

[root@k8s-master ~]# kubectl get pod nfs-client-provisioner-7564dfd4b-z9v26 -n default -o yaml | kubectl replace --force -f -
pod "nfs-client-provisioner-7564dfd4b-z9v26" deleted
pod/nfs-client-provisioner-7564dfd4b-z9v26 replaced
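Alternatively, since the pod is managed by a Deployment, simply deleting it lets the ReplicaSet recreate it, which is often simpler than the replace pipeline above:

```shell
kubectl delete pod -n default -l app=nfs-client-provisioner
# the ReplicaSet immediately creates a replacement pod
```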

Installing KubeSphere

[root@k8s-master ~]# wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

[root@k8s-master ~]# kubectl apply -f kubesphere-installer.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created

[root@k8s-master ~]# wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

[root@k8s-master ~]# vim cluster-configuration.yaml
# modify the following parameters:
spec:
  alerting:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryReq: 1000Mi

[root@k8s-master ~]# kubectl apply -f cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created

Check the installation log:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

If output like the following appears, the installation succeeded:

PLAY RECAP *********************************************************************
localhost                  : ok=31   changed=25   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0

Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task openpitrix status is successful  (1/4)
task network status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is successful  (4/4)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://172.16.2.135:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-06-17 15:29:21
#####################################################

Run kubectl get pod --all-namespaces to check whether all pods in the KubeSphere-related namespaces are running normally.

[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS   AGE
default                        nfs-client-provisioner-7564dfd4b-sxqnq             1/1     Running   0          45m
kube-system                    coredns-545d6fc579-9pmxl                           1/1     Running   1          6d
kube-system                    coredns-545d6fc579-qkxhl                           1/1     Running   1          6d
kube-system                    etcd-k8s-master                                    1/1     Running   1          6d
kube-system                    kube-apiserver-k8s-master                          1/1     Running   1          6d
kube-system                    kube-controller-manager-k8s-master                 1/1     Running   3          6d
kube-system                    kube-flannel-ds-9psmw                              1/1     Running   1          5d22h
kube-system                    kube-flannel-ds-tx5vd                              1/1     Running   1          5d22h
kube-system                    kube-flannel-ds-wthph                              1/1     Running   1          5d22h
kube-system                    kube-proxy-9km4t                                   1/1     Running   1          6d
kube-system                    kube-proxy-dp8gw                                   1/1     Running   1          6d
kube-system                    kube-proxy-fm2x9                                   1/1     Running   1          6d
kube-system                    kube-scheduler-k8s-master                          1/1     Running   3          6d
kube-system                    snapshot-controller-0                              1/1     Running   0          12m
kubesphere-controls-system     default-http-backend-5bf68ff9b8-xfbmn              1/1     Running   0          12m
kubesphere-controls-system     kubectl-admin-7b69cb97d5-q942q                     1/1     Running   0          7m8s
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running   0          5m15s
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running   0          5m15s
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running   0          5m14s
kubesphere-monitoring-system   kube-state-metrics-687c7c4d86-zh9qb                3/3     Running   0          10m
kubesphere-monitoring-system   node-exporter-55gwd                                2/2     Running   0          10m
kubesphere-monitoring-system   node-exporter-55hs4                                2/2     Running   0          10m
kubesphere-monitoring-system   node-exporter-l2trl                                2/2     Running   0          10m
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-ggk47   1/1     Running   0          4m1s
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-n9jss   1/1     Running   0          4m2s
kubesphere-monitoring-system   notification-manager-operator-78595d8666-pw48b     2/2     Running   0          9m41s
kubesphere-monitoring-system   prometheus-k8s-0                                   0/3     Pending   0          5m15s
kubesphere-monitoring-system   prometheus-k8s-1                                   0/3     Pending   0          5m15s
kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-t5mlw                2/2     Running   0          10m
kubesphere-system              ks-apiserver-645459688d-jxmkz                      1/1     Running   0          8m26s
kubesphere-system              ks-console-7b6994bcd9-hxwz2                        1/1     Running   0          11m
kubesphere-system              ks-controller-manager-5664f5c957-2xg5k             1/1     Running   0          8m26s
kubesphere-system              ks-installer-6f4c9fcfcf-xv6hm                      1/1     Running   0          14m

If they are, check the console's port (30880 by default) with the following command:

[root@k8s-master ~]# kubectl get svc/ks-console -n kubesphere-system
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
ks-console   NodePort   10.105.29.94   <none>        80:30880/TCP   11m

Make sure port 30880 is open in your security group, then access the web console via the NodePort (NodeIP:30880) with the default account and password (admin/P@88w0rd).
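On bare-metal CentOS 7 nodes running firewalld, opening the NodePort might look like this (a sketch; on cloud hosts you would edit the security group rules instead):

```shell
firewall-cmd --permanent --add-port=30880/tcp   # allow the ks-console NodePort
firewall-cmd --reload                           # apply the new rule
```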

Open the following URL in a browser:

http://172.16.2.135:30880

to log in to the console. You can check the status of each component under Service Components.

If you want to use the related services, you may need to wait for certain components to come up; some of them start slowly.

Uninstalling

Download the script from https://github.com/kubesphere/ks-installer/blob/release-3.1/scripts/kubesphere-delete.sh and run it.
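A typical invocation from the master node might look like the following (the raw.githubusercontent.com URL is the raw form of the blob link above; verify it before relying on it):

```shell
wget https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.1/scripts/kubesphere-delete.sh
sh kubesphere-delete.sh   # removes the KubeSphere namespaces and resources
```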

Then reboot all cluster nodes.