Hostname | Node IP | Role | Deployed Components |
---|---|---|---|
k8s-master | 10.200.51.36 | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel |
k8s-slave1 | 10.200.51.49 | node | kubectl, kubelet, kube-proxy, flannel |
k8s-slave2 | 10.200.51.54 | node | kubectl, kubelet, kube-proxy, flannel |
Component | Version | Notes |
---|---|---|
CentOS | 7.7.1908 | |
Kernel | Linux 3.10.0-1062.9.1.el7.x86_64 | |
etcd | 3.3.15 | Deployed as a container; data is mounted to a local path by default |
coredns | 1.6.2 | |
kubeadm | v1.16.2 | |
kubectl | v1.16.2 | |
kubelet | v1.16.2 | |
kube-proxy | v1.16.2 | |
flannel | v0.11.0 | |
Nodes to operate on: all nodes (k8s-master and k8s-slave) must run the following.
Set the hostname (use the matching name on each node, e.g. k8s-slave1 on the first slave):
hostnamectl set-hostname k8s-master
Add hosts entries:
cat >> /etc/hosts <<EOF
10.200.51.36 k8s-master
10.200.51.49 k8s-slave1
10.200.51.54 k8s-slave2
EOF
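To quickly verify that the new entries resolve, a ping from any node will do:
ping -c1 k8s-slave1
ping -c1 k8s-slave2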
If there are no security-group restrictions between nodes (machines on the internal network can reach each other freely), this step can be skipped; otherwise, at least the following ports must be reachable:
k8s-master node: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
k8s-slave nodes: all UDP ports open
# Disable swap (kubelet will not start with swap enabled)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
# Disable the firewall
systemctl disable firewalld && systemctl stop firewalld
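A quick check that all three changes took effect (SELinux reports Permissive until the next reboot):
free -m | grep -i swap         # swap total/used should be 0
getenforce                     # Permissive now, Disabled after reboot
systemctl is-active firewalld  # should print: inactive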
# Kernel parameters required by k8s
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
# Load the bridge netfilter module and apply the settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
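Confirm the module is loaded and the parameters are in effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward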
$ curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all && yum makecache
Nodes to operate on: all nodes (master and slave)
## List the available docker-ce versions
$ yum list docker-ce --showduplicates | sort -r
## Install the specified version
$ yum install docker-ce-18.09.9
[root@k8s-master ~]# cat /etc/docker/daemon.json
{
  "insecure-registries": [
    "10.200.51.36:5000"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://registry.docker-cn.com",
    "https://vtbf99sa.mirror.aliyuncs.com"
  ]
}
systemctl enable docker && systemctl start docker
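Verify that docker picked up the systemd cgroup driver configured above:
docker info | grep -i 'cgroup driver'   # expected output: Cgroup Driver: systemd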
Components are the software installed to support the operation of the k8s platform.
Resources define how the capabilities of k8s are consumed. For example, k8s manages business applications with Pods, so Pod is one resource type in a k8s cluster. All resource types in the cluster can be listed with:
$ kubectl api-resources
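The output can be filtered; for example, to list only namespaced resources or to find a resource's short name (illustrative invocations):
$ kubectl api-resources --namespaced=true
$ kubectl api-resources | grep -w pods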
How to understand namespace:
A namespace is a virtual concept inside the cluster, similar to a resource pool: one pool can hold resources of various types, and the vast majority of resources must belong to some namespace. After the cluster is initialized, the following namespaces exist by default:
[root@k8s-master ~]# kubectl get namespace
NAME                   STATUS   AGE
default                Active   47h
kube-node-lease        Active   47h
kube-public            Active   47h
kube-system            Active   47h
kubernetes-dashboard   Active   46h
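Creating and using a namespace looks like this (the name demo is just an illustration):
$ kubectl create namespace demo   # create a new namespace
$ kubectl -n demo get pods        # list Pods inside it (empty for now)
$ kubectl delete namespace demo   # delete it along with everything in it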
Similar to the docker CLI, kubectl is the command-line tool for interacting with the APIServer; it ships with a rich set of subcommands and is extremely powerful. https://kubernetes.io/docs/reference/kubectl/overview/
$ kubectl -h
$ kubectl get -h
$ kubectl create -h
$ kubectl create namespace -h
How kubectl manages cluster resources (raising verbosity with -v=7 prints the REST calls sent to the APIServer):
kubectl get po -v=7
Install on all nodes:
$ yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes
## Check the kubeadm version
$ kubeadm version
## Enable kubelet at boot
$ systemctl enable kubelet
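A quick version check on every node (client side only, since the cluster is not up yet):
$ kubelet --version
$ kubectl version --client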
Nodes to operate on: only the master node (k8s-master).
$ kubeadm config print init-defaults > kubeadm.yaml
$ cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.200.51.36  # apiserver address; single-master setup, so use the master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master  # read from the current master node's hostname by default
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # changed to the Aliyun image mirror
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod subnet; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Documentation for this manifest is scattered; for a complete reference to the attributes of the resource objects above, consult the corresponding godoc at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
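kubeadm can also print the defaults of the embedded component configurations, which helps when tuning kubelet or kube-proxy settings; the --component-configs flag should be available in v1.16 (check kubeadm config print init-defaults -h if not):
$ kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration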
Nodes to operate on: only the master node (k8s-master).
# List the images that will be used; if nothing is wrong, you will get the following list:
[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2
# Pre-pull the images locally
kubeadm config images pull --config kubeadm.yaml
kubeadm init --config kubeadm.yaml
If initialization succeeds, follow the printed instructions to configure kubectl client authentication:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.200.51.36:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f98a23b18ce654b63be45cabae9c6029bf8fded846b82c05a8f5d24b9a5627e6
⚠ Note: at this point the nodes shown by kubectl get nodes will be in NotReady state, because the network plugin has not been deployed yet.
If an error occurs during initialization, adjust according to the error message, run kubeadm reset, and then run the init again.
Nodes to operate on: all slave nodes (k8s-slave).
On each slave node, run the following command. It is printed in the success message of kubeadm init; replace it with the actual command from your own init output.
kubeadm join 10.200.51.36:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
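If the bootstrap token has expired by the time a node joins (the default TTL is 24h), a fresh join command can be generated on the master:
$ kubeadm token create --print-join-command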
Nodes to operate on: only the master node (k8s-master).
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Edit the config to specify the NIC name by adding one line:
[root@k8s-master ~]# vim kube-flannel.yml
...
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33  # if the machine has multiple NICs, specify the internal one; by default the first NIC is used
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
...
# Pre-pull the flannel image, then create the resources
$ docker pull quay.io/coreos/flannel:v0.11.0-amd64
$ kubectl create -f kube-flannel.yml
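The flannel DaemonSet pods should reach Running on every node; the label below matches the manifest used here (adjust it if your copy differs):
$ kubectl -n kube-system get pods -l app=flannel -o wide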
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   2d    v1.16.2
k8s-slave1   Ready    <none>   2d    v1.16.2
k8s-slave2   Ready    <none>   47h   v1.16.2
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
$ vi recommended.yaml
# Change the Service to NodePort type
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort to turn this into a NodePort Service
---
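After saving the change, deploy the dashboard by applying the manifest:
$ kubectl create -f recommended.yaml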
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.98.136.3    <none>        8000/TCP        46h
kubernetes-dashboard        NodePort    10.105.51.87   <none>        443:31600/TCP   46h
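A quick reachability check from any node (-k skips verification of the self-signed certificate):
$ curl -k https://10.200.51.36:31600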
It is recommended to open the dashboard in Firefox (it lets you proceed past the self-signed certificate): https://10.200.51.36:31600 (use the NodePort shown in the Service above).
Logging in to the dashboard requires a token; create an admin ServiceAccount and bind it to the cluster-admin ClusterRole:
[root@k8s-master ~]# cat admin.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
[root@k8s-master ~]# kubectl create -f admin.yaml
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret |grep admin-token
admin-token-rxxdr   kubernetes.io/service-account-token   3   46h
# Use this command to get the token
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret admin-token-rxxdr -o jsonpath={.data.token}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IlFGT1ltSGp0S1RfckhVaW1XNV9qQVVxZ0t1TEk1WFUzbDN6U193N2tkbjAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1yeHhkciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjU0NmIxMGY3LWJlNTItNGYzMC1iMDcwLTUyOTAxNmI2ZGRiYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.Y0rX5hxebuJShcTMNPaMgCXicVSZtxcx31HoZhiB-_cFt8PAdU5GU8nGiGbvt2LW-iNqn_0E1RuuMwEsD26rfnClt23IPZpBriPEZ84fb4QQZurFsloCthM8R2YmPVy3owYN-Y-dEilsSnofhHpB2Z6oLKXHt0W7yNqUse7MSRlQNedosgpTP0E6AhL9z7GoE6l-M_SUuhR2gNc8jqo8EkG-06jOJB5DIi_SLzME4sAduqkRm4zDlJECortKvpfr02FEQ5UBwxquteqqQjyOAWo1K3tM8_fd_RMwWXCZAaaJLXgOXhDzAzkkDACx2XR0Ugzin3W_IAyVGqcCvleP2Q
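Since the secret name is random per cluster, here is a one-liner sketch that looks it up through the ServiceAccount instead of copying the name by hand (assumes the auto-created secret is the first one listed on the account, which holds on 1.16):
$ kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa admin -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d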
Then open the dashboard in the browser again and log in with this token.
If you ran into other problems during the cluster installation, you can reset with the following commands:
# Revert the changes made by kubeadm on this node
$ kubeadm reset
# Remove the network interfaces left behind by flannel
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
# Clear residual CNI state
$ rm -rf /var/lib/cni/