Preface
In the previous article, 优雅的使用Prometheus Operator (Using Prometheus Operator Gracefully), we set up the basic monitoring platform and dashboards. Now we can bring etcd into the monitoring scope as well.
Monitoring etcd
etcd.service.yaml
Since our etcd runs on its own outside of Kubernetes, to monitor it we need to manually create an Endpoints object and a matching Service. In the Endpoints below we fill in the IP addresses of our etcd nodes.
#etcd.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-k8s
  namespace: kube-system
  labels:
    k8s-app: etcd
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: port
    port: 2379
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd-k8s
  namespace: kube-system
  labels:
    k8s-app: etcd
subsets:
- addresses:
  - ip: 192.168.0.1
    nodeName: etc-master1
  - ip: 192.168.1.1
    nodeName: etc-master2
  - ip: 192.168.2.1
    nodeName: etc-master3
  ports:
  - name: port
    port: 2379
    protocol: TCP
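Before wiring this into Prometheus, it can be worth confirming that etcd really serves Prometheus metrics over plain HTTP on port 2379. This is just a sketch: it assumes your etcd client port is not protected by TLS (matching the scheme: http used in the ServiceMonitor below) and reuses the first example address from the Endpoints above.

$ curl -s http://192.168.0.1:2379/metrics | head -n 5   # expect Prometheus-format metric lines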
Create the Endpoints and Service
kubectl apply -f etcd.service.yaml
Check the Endpoints
$ kubectl get ep -nkube-system | grep etcd
etcd-k8s 192.168.0.1:2379,192.168.1.1:2379,192.168.2.1:2379 21h
Check the Service
$ kubectl get -nkube-system svc | grep etcd
etcd-k8s ClusterIP None <none> 2379/TCP 21h
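Because clusterIP is None, etcd-k8s is a headless Service: cluster DNS resolves its name straight to the three Endpoints addresses instead of a virtual IP. A quick way to check resolution from inside the cluster (a sketch, assuming a busybox image can be pulled and the default cluster.local DNS suffix):

$ kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup etcd-k8s.kube-system.svc.cluster.local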
etcd.monitoring.yaml
Write the ServiceMonitor YAML.
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd-k8s
  namespace: monitoring
  labels:
    k8s-app: etcd-k8s
spec:
  jobLabel: k8s-app
  endpoints:
  - port: port
    interval: 15s
    scheme: http
  selector:
    matchLabels:
      k8s-app: etcd
  namespaceSelector:
    matchNames:
    - kube-system
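Note how the pieces line up: spec.selector.matchLabels matches the k8s-app: etcd label we put on the Service, namespaceSelector points at kube-system because that is where the Service lives, and the ServiceMonitor itself sits in the monitoring namespace, where the Prometheus instance from the previous article picks up ServiceMonitors (assuming the default kube-prometheus selectors). To confirm the label selector actually matches something:

$ kubectl get svc -nkube-system -l k8s-app=etcd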
Deploy the monitor
kubectl apply -f etcd.monitoring.yaml
Check
$ kubectl get ServiceMonitor -nmonitoring | grep etcd
etcd-k8s 21h
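If everything lines up, the three etcd members should appear as Prometheus targets within a scrape interval or two. One way to check is to port-forward the Prometheus service and open its targets page; the service name prometheus-k8s below is the kube-prometheus default and is an assumption, so adjust it to your setup if needed.

$ kubectl -nmonitoring port-forward svc/prometheus-k8s 9090:9090
# then browse to http://localhost:9090/targets and look for the etcd-k8s job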
- Then configure the Grafana dashboard: choose Import, enter dashboard ID 10322, click Load, and select the etcd dashboard.