Set the node names and time zone, install the dependency packages, and disable swap, the firewall, and so on.
Set a permanent hostname on each machine, then log in again:

    hostnamectl set-hostname m1    # master node
    hostnamectl set-hostname n1    # node
    hostnamectl set-hostname n2    # node

The hostname is stored in the /etc/hostname file. Edit the /etc/hosts file on every machine and add the hostname-to-IP mappings:
    vi /etc/hosts
    10.10.30.201 m1
    10.10.30.202 n1
    10.10.30.203 n2
    10.10.30.211 nfs    # with this entry, the name nfs can be used to refer to the NFS server address
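As an optional sanity check (not part of the original steps), you can confirm from each machine that the names defined above resolve correctly:

    for h in m1 n1 n2 nfs; do ping -c 1 "$h"; done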
    # Set the system time zone
    sudo timedatectl set-timezone Asia/Shanghai
    # Write the current UTC time to the hardware clock
    sudo timedatectl set-local-rtc 0
    # Restart services that depend on the system time
    sudo systemctl restart rsyslog
    sudo systemctl restart crond
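To verify the result (an optional check, not in the original write-up), run timedatectl with no arguments; it prints the current time zone and RTC settings:

    timedatectl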
Install the dependency packages on every machine:

    sudo yum install -y epel-release
    sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
Disable the firewall on every machine:

    sudo systemctl stop firewalld
    sudo systemctl disable firewalld
    sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
    sudo iptables -P FORWARD ACCEPT
If a swap partition is enabled, kubelet will fail to start (this can be ignored by setting --fail-swap-on to false), so swap must be disabled on every machine:

    sudo swapoff -a
To prevent the swap partition from being mounted automatically at boot, comment out the corresponding entry in /etc/fstab:

    sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Check that swap has been disabled:

    free -m
Disable SELinux, otherwise Kubernetes may later report Permission denied when mounting directories:

    sudo setenforce 0
    grep SELINUX /etc/selinux/config
    SELINUX=disabled    # make sure the config file contains this setting
If dnsmasq is enabled on the Linux system (e.g. in a GUI environment), it sets the system DNS server to 127.0.0.1, which prevents Docker containers from resolving domain names, so it must be turned off:

    sudo service dnsmasq stop
    sudo systemctl disable dnsmasq
    cat > kubernetes.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.inotify.max_user_watches=89100
    EOF
    sudo cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
    sudo sysctl -p /etc/sysctl.d/kubernetes.conf
    sudo mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct
    sudo modprobe br_netfilter
    sudo modprobe ip_vs
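As an optional check (not in the original text), confirm that the modules are loaded and that the key sysctl values from the previous step took effect:

    lsmod | grep -e br_netfilter -e ip_vs
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward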
Unless otherwise noted, all operations in this document are performed on the m1 node, and files and commands are then distributed to the other nodes remotely.
Allow m1 to log in to the root account of every node without a password:

    ssh-keygen -t rsa
    ssh-copy-id root@m1
    ssh-copy-id root@n1
    ssh-copy-id root@n2
With this in place, batch operations across multiple nodes can be done like this:

    export NODE_IPS=(m1 m2 m3 n1 n2)
    for node_ip in ${NODE_IPS[@]}
    do
      echo ">>> ${node_ip}"
      scp /k8s/cri/containerd-1.6.8-linux-amd64.tar.gz root@${node_ip}:/k8s/cri
      ssh root@${node_ip} "tar Cxzvf /usr/local /k8s/cri/containerd-1.6.8-linux-amd64.tar.gz"
    done
The centos-extras repository must be enabled. It is enabled by default, but if it has been disabled it needs to be re-enabled.

    sudo yum install -y yum-utils
    sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin

    sudo systemctl daemon-reload
    sudo systemctl start docker

    sudo systemctl status docker
cri-dockerd provides a shim for Docker so that Docker can be controlled through the Kubernetes Container Runtime Interface (CRI).
Since Kubernetes 1.24+, Docker requires cri-dockerd to be installed alongside it.
First, download the Go release for your platform from the official site:
https://go.dev/doc/install
Remove any previously installed Go, then extract the downloaded archive into /usr/local, which creates a fresh tree under /usr/local/go:

    rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz
Configure the environment variable by adding the following to /etc/profile or $HOME/.profile:

    export PATH=$PATH:/usr/local/go/bin

Then apply it:

    source /etc/profile

Finally, run:

    go version

to confirm that the installation succeeded.
    sudo yum -y install git
    git clone https://github.com/Mirantis/cri-dockerd.git
Build it:

    cd cri-dockerd
    mkdir bin
    go build -o bin/cri-dockerd
    mkdir -p /usr/local/bin
    install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
    cp -a packaging/systemd/* /etc/systemd/system
    sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
1. Append --network-plugin=cni to tell the runtime to use Kubernetes' network interface.
2. Override the sandbox (pause) image. Normally the k8s.gcr.io/pause:3.8 image cannot be pulled from inside China; it can be replaced with kubebiz/pause:3.8. This image is the foundation of every Pod, so either import it manually or switch to a domestic mirror, and override the default sandbox image with the configuration below.
Edit:

    vim /etc/systemd/system/cri-docker.service
Append the parameters from steps 1 and 2 to the ExecStart line, for example:

    ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=kubebiz/pause:3.8
    systemctl daemon-reload
    systemctl enable cri-docker.service
    systemctl enable --now cri-docker.socket
    systemctl start cri-docker

    systemctl status cri-docker
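As an extra check (not in the original steps), confirm that the CRI socket referenced later by kubeadm actually exists:

    ls -l /var/run/cri-dockerd.sock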
Confirm that the installation succeeded.
Prerequisites for the nodes:
Swap partition: to keep the kubelet working properly, swap must be disabled.
Use ip link or ifconfig -a to obtain the MAC addresses of the network interfaces, and use sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid. Hardware devices usually have unique addresses, but some virtual machines may have duplicates. Kubernetes uses these values to uniquely identify the nodes in the cluster; if they are not unique on every node, the installation may fail.
If there is more than one network adapter and the Kubernetes components are not reachable over the default route, it is recommended to add IP routes in advance so that the cluster can communicate over the correct adapter.
Make sure the br_netfilter module is loaded. This can be verified with lsmod | grep br_netfilter; to load it explicitly, run sudo modprobe br_netfilter.
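The checks above can be run on each node with the commands already mentioned in the text; this is only a convenience listing:

    ip link                                   # MAC addresses of the network interfaces
    sudo cat /sys/class/dmi/id/product_uuid   # product_uuid of the machine
    lsmod | grep br_netfilter                 # confirm the module is loaded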
For iptables on the Linux nodes to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration. For example:
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
The following packages need to be installed on every machine:
kubeadm: the command used to initialize the cluster.
kubelet: runs on every node in the cluster and starts Pods and containers.
kubectl: the command-line tool used to talk to the cluster.
Make sure the versions of these three components match; mismatched versions can lead to unexpected errors and problems.
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    systemctl enable kubelet && systemctl start kubelet
    ## Alternatively, install a specific version:
    ## yum install kubectl-1.21.3-0.x86_64 kubeadm-1.21.3-0.x86_64 kubelet-1.21.3-0.x86_64
PS: because the upstream site does not expose a sync mechanism, the GPG index check may fail; in that case install with yum install -y --nogpgcheck kubelet kubeadm kubectl.
Of the three IP/CIDR options in the command below, only the first (the node's own IP) really has to be specified; the other two can be omitted (specifying them is recommended and is the common practice).
Also note: the "control plane" is in fact just the master node; Kubernetes later renamed it, and the English term is control-plane.
Inside China, switch to the Aliyun image repository:
    kubeadm init --apiserver-advertise-address=10.10.30.201 \
      --pod-network-cidr=192.168.0.0/16 \
      --service-cidr=10.96.0.0/12 \
      --image-repository registry.aliyuncs.com/google_containers \
      --cri-socket unix:///var/run/cri-dockerd.sock    # if kubeadm complains that no CRI was specified, the CRI must be given explicitly
Output:
    [init] Using Kubernetes version: v1.26.0
    [preflight] Running pre-flight checks
            [WARNING FileExisting-tc]: tc not found in system path
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1] and IPs [10.96.0.1 10.10.30.201]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [10.10.30.201 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [10.10.30.201 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 17.502216 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node m1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
    [bootstrap-token] Using token: 18g6j5.k9a5tja9qn5ko7yw
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

      export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.10.30.201:6443 --token 18g6j5.k9a5tja9qn5ko7yw \
        --discovery-token-ca-cert-hash sha256:f7d9d6d7fb0f79600caee4adb2bb4ebecd543f71b533f22aadd4c88638417a63
An error may be reported here:

    Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
    To see the stack trace of this error execute with --v=5 or higher
Cause: the kubelet and cri-dockerd have not been tied together.
Fix: append --cri-socket unix:///var/run/cri-dockerd.sock to the command.
Run the following commands to set up the kubectl configuration for your user; they are also part of the kubeadm init output:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
Or, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf
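To confirm that kubectl can now reach the API server (an optional check, not part of the original text), run:

    kubectl cluster-info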
PS: on a single-node cluster, remove the control-plane taint, otherwise the network components may fail because there are no worker nodes to schedule onto.
By default, for security reasons, the cluster does not schedule Pods on the master node. If you want to be able to schedule Pods on the master, run:

    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
This removes the node-role.kubernetes.io/master taint from any node that has it, including the control-plane nodes, which means the scheduler can place Pods anywhere.
Take the following information from the kubeadm init output in the previous step and run it on each worker node:

    kubeadm join 10.10.30.201:6443 --token 18g6j5.k9a5tja9qn5ko7yw \
        --discovery-token-ca-cert-hash sha256:f7d9d6d7fb0f79600caee4adb2bb4ebecd543f71b533f22aadd4c88638417a63
If you do not have the --token value, you can get it by running the following on the control-plane node:

    kubeadm token list

By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new token on the control-plane node with:

    kubeadm token create
If you do not have the value of --discovery-token-ca-cert-hash, it can be obtained by running the following command chain on the control-plane node:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
        openssl dgst -sha256 -hex | sed 's/^.* //'
Install the Calico network add-on:

    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Once all of the steps above are complete, run:

    kubectl get pods -A

and confirm that every component is in the Running state.
Then run:

    kubectl get nodes

and confirm that all nodes are Ready.
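If you prefer to block until the nodes come up rather than polling by hand, kubectl wait can be used; this is only an optional convenience, not part of the original steps:

    kubectl wait --for=condition=Ready nodes --all --timeout=300s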
Dashboard is a web-based Kubernetes user interface. It can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources. It also gives an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on).
It can be deployed with the following command:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
But that URL is most likely unreachable from inside China, so you can switch to the kubebiz source:

    kubectl apply -f https://www.kubebiz.com/raw/KubeBiz/Kubernetes%20Dashboard/v2.5.0/recommended.yaml
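After applying the manifest, you can check that the Dashboard pods come up (an optional check, not in the original text):

    kubectl -n kubernetes-dashboard get pods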
You can also create recommended.yaml yourself and then deploy it:

    apiVersion: v1
    kind: Namespace            # create the namespace
    metadata:
      name: kubernetes-dashboard
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 443
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
      # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
      # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
        verbs: ["get"]
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.5.0
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.7
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
                - mountPath: /tmp
                  name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}
To protect cluster data, Dashboard is deployed with a minimal RBAC configuration by default. Currently Dashboard only supports logging in with a Bearer token; follow the guide below to create a user that can log in.
Use a Kubernetes Service Account to create a new user, grant that user admin permissions, and use the token bound to that user to log in to Dashboard.
Important: make sure you know what you are doing before proceeding. Granting admin privileges to the Dashboard service account can be a security risk.
First create a Service Account named admin-user in the kubernetes-dashboard namespace:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
In most cases, after deploying a cluster with kops, kubeadm, or any other popular tool, the ClusterRole cluster-admin already exists in the cluster. It can be used directly by creating just a ClusterRoleBinding for the ServiceAccount. If it does not exist, create the role first and grant the required permissions manually.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kubernetes-dashboard
The two manifests above, the ServiceAccount and the ClusterRoleBinding, need to be created. You can copy them into a single new manifest named dashboard-adminuser.yaml and create them with kubectl apply -f dashboard-adminuser.yaml.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kubernetes-dashboard

    kubectl apply -f dashboard-adminuser.yaml
Now find the token that can be used to log in by running:

    kubectl -n kubernetes-dashboard create token admin-user

You can now log in with this token.
To remove the admin ServiceAccount and ClusterRoleBinding:

    kubectl -n kubernetes-dashboard delete serviceaccount admin-user
    kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user
You can enable access to Dashboard with the kubectl command-line tool:

    kubectl proxy

Then open the following URL in a browser on the same machine:

    http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
PS: note that this URL must use http.
To listen on all IP addresses and forward local port 8080 to the Dashboard's HTTPS port 443:

    kubectl port-forward -n kubernetes-dashboard --address 0.0.0.0 service/kubernetes-dashboard 8080:443

Keep the process running; the Dashboard can then be reached on a node's IP at port 8080 (note that it must be accessed over https).
Alternatively, create a Service of type NodePort:
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard-nodeport
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      selector:
        k8s-app: kubernetes-dashboard
      sessionAffinity: None
      ports:
        - nodePort: 30443
          protocol: TCP
          port: 8443
          targetPort: 8443

Once it is created, the Dashboard can be reached on any node's IP at port 30443 (again, it must be accessed over https).
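A quick way to check that the NodePort responds (an optional check; the IP below is just the m1 address used earlier in this guide):

    curl -k https://10.10.30.201:30443/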
All nodes need the NFS utilities installed:

    yum -y install nfs-utils
    # rpcbind is already included in nfs-utils, so it does not need to be installed separately
    systemctl start rpcbind && \
    systemctl start nfs-server && \
    systemctl enable rpcbind && \
    systemctl enable nfs-server
    # start the services and enable them at boot (note: rpcbind must be started first)
The shared directory was already configured when the NFS server was set up; on any node you can use showmount to see which directories are exported:

    showmount -e nfs    # or: showmount -e 10.10.30.211
Kubernetes does not ship with a built-in NFS driver, so an external provisioner is needed along with a StorageClass.
The official one:

    git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Domestic (China) mirror:

    wget -O all.yaml https://www.kubebiz.com/raw/KubeBiz/nfs-client-provisioner/latest/all
Create nfs-all.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      namespace: default
      labels:
        app: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: kubebiz/nfs-subdir-external-provisioner:v4.0.2
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: k8s-sigs.io/nfs-subdir-external-provisioner
                - name: NFS_SERVER
                  value: 10.10.30.211
                - name: NFS_PATH
                  value: /k8s
          volumes:
            - name: nfs-client-root
              nfs:
                server: 10.10.30.211
                path: /k8s
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    allowVolumeExpansion: true
    parameters:
      archiveOnDelete: "false"
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
Change the default 10.3.243.101 and /data/nfs to your own NFS address and export path; here they have already been changed to 10.10.30.211 and /k8s.
Apply the YAML:

    kubectl apply -f nfs-all.yaml
Or, with the official repository:

    kubectl apply -f deploy/objects/.

Check the provisioner pod:

    kubectl get pods | grep nfs-client-provisioner
Output:

    nfs-client-provisioner-66db4f7c-9kmfn   0/1     ContainerCreating   0          3m20s
If the pod does not reach Running, get the detailed error from its logs:

    kubectl logs -f nfs-client-provisioner-66db4f7c-9kmfn

Get the storageClassName:

    kubectl get sc
It returns:

    NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           true                   35m

The StorageClass name is nfs-storage.
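To verify that dynamic provisioning works end to end, a small test PVC can be created against this StorageClass. This is only an illustrative sketch; the PVC name test-pvc is hypothetical and not from the original text:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc        # hypothetical test claim
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 1Mi
    EOF
    kubectl get pvc test-pvc    # should become Bound once the provisioner creates the PV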