
      Creating a Highly Available Kubernetes v1.12.0 Cluster with Kubeadm

      Node Planning

      Hostname       IP         Role
      k8s-master01   10.3.1.20  etcd, Master, Node, keepalived
      k8s-master02   10.3.1.21  etcd, Master, Node, keepalived
      k8s-master03   10.3.1.25  etcd, Master, Node, keepalived
      VIP            10.3.1.29  None

      Versions:

      • OS: Ubuntu 16.04
      • Docker: 17.03.2-ce
      • Kubernetes: v1.12

      The high-availability architecture diagram from the official documentation (figure omitted).

      The two most important components for high availability:

      1. etcd: the distributed key-value store and the data center of the k8s cluster.
      2. kube-apiserver: the single entry point to the cluster and the communication hub for all components. The apiserver itself is stateless, so running multiple replicas is easy.

      Other core components:

      • controller-manager and scheduler can also run in multiple copies, but only one instance is active at a time, because they mutate cluster state and consistency must be preserved.
        Since the cluster components are loosely coupled, there are many ways to make them highly available.
      • With multiple kube-apiservers, clients need a single address to connect to, so a traditional haproxy+keepalived setup floats a VIP in front of the apiservers; apiserver clients such as kubelet and kube-proxy connect to this VIP.

      Pre-installation Preparation

      1. Set up passwordless SSH login between all k8s nodes.
      2. Synchronize time across all nodes.
      3. Disable swap on every node (swapoff -a), otherwise kubelet fails to start.
      4. Add every node's hostname and IP to /etc/hosts (see the sketch below).
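
      A minimal sketch of these steps on each node; the /etc/fstab edit, which keeps swap off across reboots, is an addition not spelled out in the original:

      swapoff -a                           # disable swap now
      sed -i '/ swap / s/^/#/' /etc/fstab  # keep swap off after reboot (assumption)
      echo '10.3.1.20 k8s-master01' >> /etc/hosts
      echo '10.3.1.21 k8s-master02' >> /etc/hosts
      echo '10.3.1.25 k8s-master03' >> /etc/hosts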

      kubeadm can create an HA cluster in two ways:

      1. The etcd cluster is configured by kubeadm and runs as pods on the master nodes.
      2. The etcd cluster is deployed separately.
        Deploying etcd separately seems simpler, so that is the approach taken here.

      Deploying the etcd Cluster

      A healthy etcd cluster is a prerequisite for running the k8s cluster, so deploy etcd first.

      Install the CA certificate

      Install the CFSSL certificate management tools

      Download the prebuilt binaries directly:

      mkdir -p /opt/bin
      wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
      chmod +x cfssl_linux-amd64
      mv cfssl_linux-amd64 /opt/bin/cfssl

      wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
      chmod +x cfssljson_linux-amd64
      mv cfssljson_linux-amd64 /opt/bin/cfssljson

      wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
      chmod +x cfssl-certinfo_linux-amd64
      mv cfssl-certinfo_linux-amd64 /opt/bin/cfssl-certinfo

      echo 'export PATH=/opt/bin:$PATH' > /etc/profile.d/k8s.sh

      All k8s executables will be placed under the /opt/bin/ directory.
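
      To confirm the tools are installed and on PATH (cfssl's version subcommand is standard):

      source /etc/profile.d/k8s.sh
      cfssl version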

      Create the CA configuration file

      root@k8s-master01:~# mkdir ssl
      root@k8s-master01:~# cd ssl/
      root@k8s-master01:~/ssl# cfssl print-defaults config > config.json
      root@k8s-master01:~/ssl# cfssl print-defaults csr > csr.json
      # Create the following ca-config.json, using the format of config.json as a template
      # The expiry is set to 87600h (10 years)

      root@k8s-master01:~/ssl# cat ca-config.json
      {
        "signing": {
          "default": {
            "expiry": "87600h"
          },
          "profiles": {
            "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
            }
          }
        }
      }

      Create the CA certificate signing request

      root@k8s-master01:~/ssl# cat ca-csr.json
      {
        "CN": "kubernetes",
        "key": {
          "algo": "rsa",
          "size": 2048
        },
        "names": [
          {
            "C": "CN",
            "ST": "GD",
            "L": "SZ",
            "O": "k8s",
            "OU": "System"
          }
        ]
      }

      Generate the CA certificate and private key

      root@k8s-master01:~/ssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
      root@k8s-master01:~/ssl# ls ca*
      ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

      Copy the CA certificates to the corresponding directory on every node

      root@k8s-master01:~/ssl# mkdir -p /etc/kubernetes/ssl
      root@k8s-master01:~/ssl# cp ca* /etc/kubernetes/ssl
      root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.21:/etc/
      root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.25:/etc/

      Download etcd:

      With the CA certificate in place, etcd can now be configured.

      root@k8s-master01:~# wget https://github.com/coreos/etcd/releases/download/v3.2.22/etcd-v3.2.22-linux-amd64.tar.gz
      root@k8s-master01:~# tar xzf etcd-v3.2.22-linux-amd64.tar.gz
      root@k8s-master01:~# cp etcd-v3.2.22-linux-amd64/etcd etcd-v3.2.22-linux-amd64/etcdctl /opt/bin/

      For k8s v1.12, the etcd version must not be lower than 3.2.18.
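
      To confirm the installed binary meets this requirement:

      root@k8s-master01:~# etcd --version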

      Create the etcd certificates

      Create the etcd certificate signing request file

      root@k8s-master01:~/ssl# cat etcd-csr.json
      {
        "CN": "etcd",
        "hosts": [
          "127.0.0.1",
          "10.3.1.20",
          "10.3.1.21",
          "10.3.1.25"
        ],
        "key": {
          "algo": "rsa",
          "size": 2048
        },
        "names": [
          {
            "C": "CN",
            "ST": "GD",
            "L": "SZ",
            "O": "k8s",
            "OU": "System"
          }
        ]
      }
      # Note: the hosts field must list the IPs of all etcd nodes, otherwise etcd will fail to start.

      Generate the etcd certificate and private key

        root@k8s-master01:~/ssl# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
          -ca-key=/etc/kubernetes/ssl/ca-key.pem \
          -config=/etc/kubernetes/ssl/ca-config.json \
          -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
        2018/10/01 10:01:14 [INFO] generate received request
        2018/10/01 10:01:14 [INFO] received CSR
        2018/10/01 10:01:14 [INFO] generating key: rsa-2048
        2018/10/01 10:01:15 [INFO] encoded CSR
        2018/10/01 10:01:15 [INFO] signed certificate with serial number 379903753757286569276081473959703411651822370300
        2018/10/01 10:01:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
        websites. For more information see the Baseline Requirements for the Issuance and Management
        of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
        specifically, section 10.2.3 ("Information Requirements").

        root@k8s-master01:~/ssl# ls etcd*
        etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

      # The value of -profile=kubernetes corresponds to the profiles field in the -config=/etc/kubernetes/ssl/ca-config.json file.
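
      To double-check that the hosts/SANs made it into the signed certificate, it can be inspected with the cfssl-certinfo tool installed earlier:

      root@k8s-master01:~/ssl# cfssl-certinfo -cert etcd.pem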

      Copy the certificates to the corresponding directory on all nodes:

      root@k8s-master01:~/ssl# mkdir -p /etc/etcd/ssl   # create the target directory first
      root@k8s-master01:~/ssl# cp etcd*.pem /etc/etcd/ssl
      root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.21:/etc/
      etcd-key.pem                                                      100% 1675    1.5KB/s  00:00                                   
      etcd.pem                                                              100% 1407    1.4KB/s  00:00                         
      root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.25:/etc/
      etcd-key.pem                                                      100% 1675    1.6KB/s  00:00   
      etcd.pem                                                              100% 1407    1.4KB/s  00:00

      Create the etcd systemd unit file

      With the certificates ready, the startup unit can be configured.

      root@k8s-master01:~# mkdir -p /var/lib/etcd  # the etcd working directory must be created first

      root@k8s-master01:~# cat /etc/systemd/system/etcd.service
      [Unit]
      Description=Etcd Server
      After=network.target
      After=network-online.target
      Wants=network-online.target
      Documentation=https://github.com/coreos

      [Service]
      Type=notify
      WorkingDirectory=/var/lib/etcd/
      EnvironmentFile=-/etc/etcd/etcd.conf
      ExecStart=/opt/bin/etcd \
        --name=etcd-host0 \
        --cert-file=/etc/etcd/ssl/etcd.pem \
        --key-file=/etc/etcd/ssl/etcd-key.pem \
        --peer-cert-file=/etc/etcd/ssl/etcd.pem \
        --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
        --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
        --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
        --initial-advertise-peer-urls=https://10.3.1.20:2380 \
        --listen-peer-urls=https://10.3.1.20:2380 \
        --listen-client-urls=https://10.3.1.20:2379,http://127.0.0.1:2379 \
        --advertise-client-urls=https://10.3.1.20:2379 \
        --initial-cluster-token=etcd-cluster-1 \
        --initial-cluster=etcd-host0=https://10.3.1.20:2380,etcd-host1=https://10.3.1.21:2380,etcd-host2=https://10.3.1.25:2380 \
        --initial-cluster-state=new \
        --data-dir=/var/lib/etcd

      Restart=on-failure
      RestartSec=5
      LimitNOFILE=65536

      [Install]
      WantedBy=multi-user.target

      Start etcd

      root@k8s-master01:~/ssl# systemctl daemon-reload
      root@k8s-master01:~/ssl# systemctl enable etcd
      root@k8s-master01:~/ssl# systemctl start etcd

      Copy the etcd unit file to the other two nodes, adjust the node-specific values, and start them; a sketch of the required changes follows.
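
      For example, on k8s-master02 only the member name and that node's own URLs change (a sketch based on the unit file above; adjust etcd-host2 / 10.3.1.25 analogously on k8s-master03):

      --name=etcd-host1
      --initial-advertise-peer-urls=https://10.3.1.21:2380
      --listen-peer-urls=https://10.3.1.21:2380
      --listen-client-urls=https://10.3.1.21:2379,http://127.0.0.1:2379
      --advertise-client-urls=https://10.3.1.21:2379
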
      Check the cluster state.
      Because etcd is secured with certificates, etcdctl commands must carry the certificate flags:

      # list the etcd members
      root@k8s-master01:~# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem member list
      702819a30dfa37b8: name=etcd-host2 peerURLs=https://10.3.1.20:2380 clientURLs=https://10.3.1.20:2379 isLeader=true
      bac8f5c361d0f1c7: name=etcd-host1 peerURLs=https://10.3.1.21:2380 clientURLs=https://10.3.1.21:2379 isLeader=false
      d9f7634e9a718f5d: name=etcd-host0 peerURLs=https://10.3.1.25:2380 clientURLs=https://10.3.1.25:2379 isLeader=false

      # or check whether the cluster is healthy
      root@k8s-master01:~/ssl# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem cluster-health
      member 1af3976d9329e8ca is healthy: got healthy result from https://10.3.1.20:2379
      member 34b6c7df0ad76116 is healthy: got healthy result from https://10.3.1.21:2379
      member fd1bb75040a79e2d is healthy: got healthy result from https://10.3.1.25:2379
      cluster is healthy

      Install Docker

      apt-get update
      apt-get install -y \
          apt-transport-https \
          ca-certificates \
          curl \
          software-properties-common
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
      apt-key fingerprint 0EBFCD88
      add-apt-repository \
          "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
          $(lsb_release -cs) \
          stable"
      apt-get update
      apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial

      After installing Docker, set the iptables FORWARD policy to ACCEPT:

      # the default is DROP
      iptables -P FORWARD ACCEPT
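
      This setting does not survive a reboot or a Docker restart. One way to reapply it automatically (an assumption, not part of the original) is a systemd drop-in for the Docker service:

      mkdir -p /etc/systemd/system/docker.service.d
      printf '[Service]\nExecStartPost=/sbin/iptables -P FORWARD ACCEPT\n' \
        > /etc/systemd/system/docker.service.d/forward-accept.conf
      systemctl daemon-reload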

      Install the kubeadm tooling

      • kubeadm must be installed on all nodes

      apt-get update && apt-get install -y apt-transport-https curl
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
      echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list
      apt-get update
      apt-get install -y kubeadm

      # this automatically installs kubeadm, kubectl, kubelet, kubernetes-cni and socat

      After installation, enable the kubelet service at boot:

      systemctl enable kubelet

      kubelet must be enabled at boot so that the cluster components come back up automatically after a system restart.

      Cluster Initialization

      Now run the cluster initialization on the three masters.
      The difference between a single-master setup and an HA setup is that for HA you give kubeadm a configuration file, and kubeadm runs init from that file on each master node.

      Write the kubeadm configuration file

      root@k8s-master01:~/kubeadm-config# cat kubeadm-config.yaml
      apiVersion: kubeadm.k8s.io/v1alpha3
      kind: ClusterConfiguration
      kubernetesVersion: stable
      networking:
        podSubnet: 192.168.0.0/16
      apiServerCertSANs:
      - k8s-master01
      - k8s-master02
      - k8s-master03
      - 10.3.1.20
      - 10.3.1.21
      - 10.3.1.25
      - 10.3.1.29
      - 127.0.0.1
      etcd:
        external:
          endpoints:
          - https://10.3.1.20:2379
          - https://10.3.1.21:2379
          - https://10.3.1.25:2379
          caFile: /etc/kubernetes/ssl/ca.pem
          certFile: /etc/etcd/ssl/etcd.pem
          keyFile: /etc/etcd/ssl/etcd-key.pem
          dataDir: /var/lib/etcd
      token: 547df0.182e9215291ff27f
      tokenTTL: "0"
      root@k8s-master01:~/kubeadm-config#

      Configuration notes:
      In v1.12 the API version has been raised to kubeadm.k8s.io/v1alpha3, and the kind is now ClusterConfiguration.
      podSubnet: the custom pod network CIDR.
      apiServerCertSANs: list the hostnames, IPs and VIP of all kube-apiserver nodes.
      etcd: external means an external etcd cluster is used; list the etcd endpoints and certificate locations below it.
      If the etcd cluster were managed by kubeadm instead, this section would be local plus the custom startup parameters.
      token: may be left out and generated with the command kubeadm token generate, as sketched below.
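
      To pre-generate a token for the config file, kubeadm can produce one; the output is a random string in the same 6-character.16-character format as the value used in the YAML above:

      root@k8s-master01:~# kubeadm token generate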

      Run init on the first master

      # make sure swap is disabled
      root@k8s-master01:~/kubeadm-config# kubeadm init --config kubeadm-config.yaml

      It prints output like the following:

      # start initializing Kubernetes v1.12.0
      [init] using Kubernetes version: v1.12.0
      # pre-flight checks before initialization
      [preflight] running pre-flight checks
      [preflight/images] Pulling images required for setting up a Kubernetes cluster
      [preflight/images] This might take a minute or two, depending on the speed of your internet connection
      # the images can be pre-pulled before init with 'kubeadm config images pull'
      [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
      # generate the kubelet service configuration
      [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [preflight] Activating the kubelet service
      # generate certificates
      [certificates] Generated ca certificate and key.
      [certificates] Generated apiserver certificate and key.
      [certificates] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master01 k8s-master02 k8s-master03] and IPs [10.96.0.1 10.3.1.20 10.3.1.20 10.3.1.21 10.3.1.25 10.3.1.29 127.0.0.1]
      [certificates] Generated apiserver-kubelet-client certificate and key.
      [certificates] Generated front-proxy-ca certificate and key.
      [certificates] Generated front-proxy-client certificate and key.
      [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
      [certificates] Generated sa key and public key.
      # generate kubeconfig files
      [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
      [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
      [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
      [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
      # generate the static pod manifests
      [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
      [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
      [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
      # the kubelet service starts and reads the pod manifests from /etc/kubernetes/manifests
      [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
      # images are pulled according to the manifests
      [init] this might take a minute or longer if the control plane images have to be pulled
      # all components are up
      [apiclient] All control plane components are healthy after 27.014452 seconds
      # upload the configuration to the "kubeadm-config" ConfigMap in "kube-system"
      [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
      # label the master node and add the taint
      [markmaster] Marking the node k8s-master01 as master by adding the label "node-role.kubernetes.io/master=''"
      [markmaster] Marking the node k8s-master01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
      [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
      # the bootstrap token in use
      [bootstraptoken] using token: w79yp6.erls1tlc4olfikli
      [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
      # finally install the essential addons: CoreDNS and the kube-proxy DaemonSet
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy

      Your Kubernetes master has initialized successfully!

      To start using your cluster, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/

      You can now join any number of machines by running the following on each node
      as root:
      # record the following command; it is needed when other nodes join.
        kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2

      • Run the commands from the prompt output:

      root@k8s-master01:~# mkdir -p $HOME/.kube
      root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

      There is now one node, and its status is "NotReady":

      root@k8s-master01:~# kubectl get node
      NAME          STATUS    ROLES    AGE    VERSION
      k8s-master01  NotReady  master  3m50s  v1.12.0
      root@k8s-master01:~#

      Check the first master's core components, which run as pods:

      root@k8s-master01:~# kubectl get pod -n kube-system -o wide
      NAME                                  READY  STATUS    RESTARTS  AGE    IP          NODE          NOMINATED NODE
      coredns-576cbf47c7-2dqsj              0/1    Pending  0          4m29s  <none>      <none>        <none>
      coredns-576cbf47c7-7sqqz              0/1    Pending  0          4m29s  <none>      <none>        <none>
      kube-apiserver-k8s-master01            1/1    Running  0          3m46s  10.3.1.20  k8s-master01  <none>
      kube-controller-manager-k8s-master01  1/1    Running  0          3m40s  10.3.1.20  k8s-master01  <none>
      kube-proxy-dpvkk                      1/1    Running  0          4m30s  10.3.1.20  k8s-master01  <none>
      kube-scheduler-k8s-master01            1/1    Running  0          3m37s  10.3.1.20  k8s-master01  <none>
      root@k8s-master01:~#
      # coredns stays Pending because of the master taint set above.
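
      The taint that keeps coredns Pending can be inspected directly (standard kubectl usage, nothing cluster-specific):

      root@k8s-master01:~# kubectl describe node k8s-master01 | grep Taints
      Taints:             node-role.kubernetes.io/master:NoSchedule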

      Copy the generated pki directory to the other master nodes

      root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.21:/etc/kubernetes/ 
      root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.25:/etc/kubernetes/ 

      Copy the kubeadm configuration file over as well

      root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.21:~/
      root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.25:~/

      The first master is now deployed. The second and third masters, and however many more you add, are all initialized with the same kubeadm-config.yaml.


      Run kubeadm init on the second master

      root@k8s-master02:~# kubeadm init --config kubeadm-config.yaml
      [init] using Kubernetes version: v1.12.0
      [preflight] running pre-flight checks
      [preflight/images] Pulling images required for setting up a Kubernetes cluster
      [preflight/images] This might take a minute or two, depending on the speed of your internet connection

      Run kubeadm init on the third master

      root@k8s-master03:~# kubeadm init --config kubeadm-config.yaml
      [init] using Kubernetes version: v1.12.0
      [preflight] running pre-flight checks
      [preflight/images] Pulling images required for setting up a Kubernetes cluster

      Finally, check the nodes:

      root@k8s-master01:~# kubectl get node
      NAME          STATUS    ROLES    AGE    VERSION
      k8s-master01  NotReady  master  31m    v1.12.0
      k8s-master02  NotReady  master  15m    v1.12.0
      k8s-master03  NotReady  master  6m52s  v1.12.0
      root@k8s-master01:~#

      Check the running state of all components:

      # the core components are all Running
      root@k8s-master01:~# kubectl get pod -n kube-system -o wide
      NAME                                  READY  STATUS              RESTARTS  AGE    IP          NODE          NOMINATED NODE
      coredns-576cbf47c7-2dqsj              0/1    ContainerCreating  0          31m    <none>      k8s-master02  <none>
      coredns-576cbf47c7-7sqqz              0/1    ContainerCreating  0          31m    <none>      k8s-master02  <none>
      kube-apiserver-k8s-master01            1/1    Running            0          30m    10.3.1.20  k8s-master01  <none>
      kube-apiserver-k8s-master02            1/1    Running            0          15m    10.3.1.21  k8s-master02  <none>
      kube-apiserver-k8s-master03            1/1    Running            0          6m24s  10.3.1.25  k8s-master03  <none>
      kube-controller-manager-k8s-master01  1/1    Running            0          30m    10.3.1.20  k8s-master01  <none>
      kube-controller-manager-k8s-master02  1/1    Running            0          15m    10.3.1.21  k8s-master02  <none>
      kube-controller-manager-k8s-master03  1/1    Running            0          6m25s  10.3.1.25  k8s-master03  <none>
      kube-proxy-6tfdg                      1/1    Running            0          16m    10.3.1.21  k8s-master02  <none>
      kube-proxy-dpvkk                      1/1    Running            0          31m    10.3.1.20  k8s-master01  <none>
      kube-proxy-msqgn                      1/1    Running            0          7m44s  10.3.1.25  k8s-master03  <none>
      kube-scheduler-k8s-master01            1/1    Running            0          30m    10.3.1.20  k8s-master01  <none>
      kube-scheduler-k8s-master02            1/1    Running            0          15m    10.3.1.21  k8s-master02  <none>
      kube-scheduler-k8s-master03            1/1    Running            0          6m26s  10.3.1.25  k8s-master03  <none>

      Remove the taint from all masters so that they can also be scheduled:

      root@k8s-master01:~# kubectl taint nodes --all node-role.kubernetes.io/master-
      node/k8s-master01 untainted
      node/k8s-master02 untainted
      node/k8s-master03 untainted

      All nodes are "NotReady" because no CNI plugin is installed yet.
      Install the Calico network plugin:

      root@k8s-master01:~# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
      configmap/calico-config created
      daemonset.extensions/calico-etcd created
      service/calico-etcd created
      daemonset.extensions/calico-node created
      deployment.extensions/calico-kube-controllers created
      clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
      clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
      serviceaccount/calico-cni-plugin created
      clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
      clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
      serviceaccount/calico-kube-controllers created

      Check the node status again:

      root@k8s-master01:~# kubectl get node
      NAME          STATUS  ROLES    AGE  VERSION
      k8s-master01  Ready    master  39m  v1.12.0
      k8s-master02  Ready    master  24m  v1.12.0
      k8s-master03  Ready    master  15m  v1.12.0

      All components on every master are now healthy:

      root@k8s-master01:~# kubectl get pod -n kube-system -o wide
      NAME                                      READY  STATUS    RESTARTS  AGE    IP              NODE          NOMINATED NODE
      calico-etcd-dcbtp                          1/1    Running  0          102s  10.3.1.25        k8s-master03  <none>
      calico-etcd-hmd2h                          1/1    Running  0          101s  10.3.1.20        k8s-master01  <none>
      calico-etcd-pnksz                          1/1    Running  0          99s    10.3.1.21        k8s-master02  <none>
      calico-kube-controllers-75fb4f8996-dxvml  1/1    Running  0          117s  10.3.1.25        k8s-master03  <none>
      calico-node-6kvg5                          2/2    Running  1          117s  10.3.1.21        k8s-master02  <none>
      calico-node-82wjt                          2/2    Running  1          117s  10.3.1.25        k8s-master03  <none>
      calico-node-zrtj4                          2/2    Running  1          117s  10.3.1.20        k8s-master01  <none>
      coredns-576cbf47c7-2dqsj                  1/1    Running  0          38m    192.168.85.194  k8s-master02  <none>
      coredns-576cbf47c7-7sqqz                  1/1    Running  0          38m    192.168.85.193  k8s-master02  <none>
      kube-apiserver-k8s-master01                1/1    Running  0          37m    10.3.1.20        k8s-master01  <none>
      kube-apiserver-k8s-master02                1/1    Running  0          22m    10.3.1.21        k8s-master02  <none>
      kube-apiserver-k8s-master03                1/1    Running  0          12m    10.3.1.25        k8s-master03  <none>
      kube-controller-manager-k8s-master01      1/1    Running  0          37m    10.3.1.20        k8s-master01  <none>
      kube-controller-manager-k8s-master02      1/1    Running  0          21m    10.3.1.21        k8s-master02  <none>
      kube-controller-manager-k8s-master03      1/1    Running  0          12m    10.3.1.25        k8s-master03  <none>
      kube-proxy-6tfdg                          1/1    Running  0          23m    10.3.1.21        k8s-master02  <none>
      kube-proxy-dpvkk                          1/1    Running  0          38m    10.3.1.20        k8s-master01  <none>
      kube-proxy-msqgn                          1/1    Running  0          14m    10.3.1.25        k8s-master03  <none>
      kube-scheduler-k8s-master01                1/1    Running  0          37m    10.3.1.20        k8s-master01  <none>
      kube-scheduler-k8s-master02                1/1    Running  0          22m    10.3.1.21        k8s-master02  <none>
      kube-scheduler-k8s-master03                1/1    Running  0          12m    10.3.1.25        k8s-master03  <none>
      root@k8s-master01:~#

      Deploying Worker Nodes

      Run kubeadm join on every worker node to add it to the Kubernetes cluster; here we uniformly use k8s-master01's apiserver address to join.

      Join the cluster on k8s-node01:

      root@k8s-node01:~# kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2

      The output looks like this:

      [preflight] running pre-flight checks
          [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
      you can solve this problem with following methods:
       1. Run 'modprobe -- ' to load missing kernel modules;
      2. Provide the missing builtin kernel ipvs support

          [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
      [discovery] Trying to connect to API Server "10.3.1.20:6443"
      [discovery] Created cluster-info discovery client, requesting info from "https://10.3.1.20:6443"
      [discovery] Requesting info from "https://10.3.1.20:6443" again to validate TLS against the pinned public key
      [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.1.20:6443"
      [discovery] Successfully established connection with API Server "10.3.1.20:6443"
      [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
      [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [preflight] Activating the kubelet service
      [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
      [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

      This node has joined the cluster:
      * Certificate signing request was sent to apiserver and a response was received.
      * The Kubelet was informed of the new secure connection details.

      Run 'kubectl get nodes' on the master to see this node join the cluster.

      Check the components running on the new node:

      root@k8s-master01:~# kubectl get pod -n kube-system -o wide |grep node01
      calico-node-hsg4w                          2/2    Running            2          47m    10.3.1.63        k8s-node01    <none>
      kube-proxy-xn795                          1/1    Running            0          47m    10.3.1.63        k8s-node01    <none>

      Check the current node status.

      # there are now four nodes, all Ready
      root@k8s-master01:~# kubectl get node
      NAME          STATUS  ROLES    AGE    VERSION
      k8s-master01  Ready    master  132m  v1.12.0
      k8s-master02  Ready    master  117m  v1.12.0
      k8s-master03  Ready    master  108m  v1.12.0
      k8s-node01    Ready    <none>  52m    v1.12.0

      Deploying keepalived

      Deploy keepalived on the three master nodes: apiserver + keepalived floats a VIP, and clients such as kubectl, kubelet and kube-proxy use the VIP when connecting to the apiserver. A load balancer is not used for now.

      • Install keepalived

      apt-get install -y keepalived

      • Write the keepalived configuration file

      # MASTER node
      cat /etc/keepalived/keepalived.conf
      ! Configuration File for keepalived
      global_defs {
        notification_email {
    root@localhost
        }
        notification_email_from Alexandre.Cassen@firewall.loc
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
        router_id KEP
      }

      vrrp_script chk_k8s {
          script “killall -0 kube-apiserver”
          interval 1
          weight -5
      }

      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 100
          advert_int 1
          authentication {
              auth_type PASS
              auth_pass 1111
          }
          virtual_ipaddress {
              10.3.1.29
          }
       track_script {
          chk_k8s
       }
       notify_master “/data/service/keepalived/notify.sh master”
       notify_backup “/data/service/keepalived/notify.sh backup”
       notify_fault “/data/service/keepalived/notify.sh fault”
      }

      Copy this configuration file to the remaining masters, lower the priority and set the state to BACKUP; keepalived then floats the VIP 10.3.1.29, which was already included in the certificates created earlier. A sketch of the per-node changes follows.
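
      On the backup masters only a few fields change (a sketch, assuming priorities of 90 and 80 for master02 and master03):

      vrrp_instance VI_1 {
          state BACKUP        # was MASTER on the first node
          priority 90         # lower than the MASTER's 100; e.g. 80 on master03
          ...                 # everything else stays the same
      }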

      Update the client configurations

      When kubeadm init ran, the kubelet and kube-proxy on each node were pointed at a single kube-apiserver. This step is therefore to edit those two components' configuration files and change the kube-apiserver address to the VIP, as sketched below.
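
      One way to do this (the file locations are standard kubeadm paths, but the exact edits are an assumption, not spelled out in the original): rewrite the server address in each node's kubelet kubeconfig and in the kube-proxy ConfigMap, then restart the components:

      # on each node: point the kubelet at the VIP
      sed -i 's#https://10.3.1.20:6443#https://10.3.1.29:6443#' /etc/kubernetes/kubelet.conf
      systemctl restart kubelet

      # kube-proxy reads its kubeconfig from a ConfigMap; change the server: line to the VIP
      kubectl -n kube-system edit configmap kube-proxy
      kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate the daemonset pods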

      Verifying the Cluster

      Create an nginx deployment

      root@k8s-master01:~# kubectl run nginx --image=nginx:1.10 --port=80 --replicas=1
      deployment.apps/nginx created

      Check that the nginx pod was created:

      root@k8s-master01:~# kubectl get pod -o wide
      NAME                    READY  STATUS   RESTARTS  AGE  IP             NODE        NOMINATED NODE
      nginx-787b58fd95-p9jwl  1/1    Running  0         70s  192.168.45.23  k8s-node02  <none>

      Create a NodePort service for nginx:

      $ kubectl expose deployment nginx --type=NodePort --port=80
      service "nginx" exposed

      Check that the nginx service was created:

      $ kubectl get svc -l=run=nginx -o wide
      NAME      TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)        AGE      SELECTOR
      nginx    NodePort  10.101.144.192  <none>        80:30847/TCP  10m      run=nginx

      Verify that the nginx NodePort service actually serves traffic:

      $ curl 10.3.1.21:30847
      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
          ………

      This shows the HA cluster is working. Note that kubeadm's HA support is still at v1alpha status, so use it in production with caution; for the detailed deployment guide, you can also consult the official documentation.
