Applications like Elasticsearch are heavily IO-bound and sensitive to disk performance. Storing its data on a distributed file system (NFS, GlusterFS, etc.) hurts ES performance significantly. Elasticsearch is a stateful application, so we can deploy it in one of two ways: StatefulSet + Local PV, or Deployment + NodeSelector + HostPath. In essence, the two approaches differ very little.

The three ES roles

An Elasticsearch cluster is made up of three kinds of node roles:

  1. Master node (metadata node): responsible for cluster-level operations such as creating or deleting indices, tracking which nodes are part of the cluster, and deciding which shards are allocated to which nodes.

node.master: true

  2. Data node: stores the index data and handles document create/read/update/delete and aggregation operations; all of the cluster's data lives on these nodes.

node.data: true

  3. Coordinating node: only receives requests, forwards them to other nodes, and merges the results each node returns.

node.data: false
node.master: false

  • Every node in the cluster can take on one or more of these roles, for example acting as both a master and a data node. For ease of demonstration, that is the mode we deploy here: master and data share the same nodes, as sketched below.
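For reference, the settings above can also live in elasticsearch.yml. A minimal sketch for a node that takes on both the master and data roles (the mode we deploy below, where the same settings are injected as environment variables instead; ES 7.x style, illustrative only):

# elasticsearch.yml -- combined master + data node (matches the env vars used in the manifests below)
cluster.name: efk-cluster
node.name: es-m1
node.master: true
node.data: true
discovery.seed_hosts: ["es-m1", "es-m2", "es-m3"]
cluster.initial_master_nodes: ["es-m1", "es-m2", "es-m3"]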

Deploying with Deployment + NodeSelector + HostPath

Here we pick three Kubernetes nodes for the deployment:

  • node1 (runs es-m1, which also acts as a data node)
  • node2 (runs es-m2, which also acts as a data node)
  • node3 (runs es-m3, which also acts as a data node)

Node preparation (the following steps must be run on every one of these Kubernetes nodes):

  1. Increase vm.max_map_count, otherwise ES will refuse to start; run this on every machine:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf && sysctl -p
  2. Create the ES data directory and grant ownership (a quick verification of both settings is sketched after these steps):

# Create the es-data data directory
mkdir /es-data
# The ES Dockerfile runs as user 1000, so we chown to 1000:1000; otherwise ES fails to start due to insufficient permissions.
chown -R 1000:1000 /es-data

You can learn more from the Dockerfile that Elastic provides for the ES image.
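Before moving on, a quick sanity check of both settings (not part of the original steps; run on each node):

# The kernel parameter should now report 262144
sysctl vm.max_map_count
# -n prints numeric uid/gid; expect 1000 1000 for /es-data
ls -ldn /es-data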

es-m1.yaml

# Define a headless Service: we do not need VIP/load-balancing here, so a headless Service is enough.
# The node can then be reached at es-m1.default.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: es-m1
  labels:
    app: es-m1
spec:
  # No VIP needed
  clusterIP: None
  ports:
    - port: 9300
      protocol: TCP
      name: port-9300
    - port: 9200
      protocol: TCP
      name: port-9200
  selector:
    app: es-m1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-m1
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: es-m1
  template:
    metadata:
      labels:
        app: es-m1
    spec:
      # Pin this Pod to node1 (nodeName is used here rather than a nodeSelector)
      nodeName: node1
      containers:
        - name: es-m1
          image: elasticsearch:7.2.0
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              port: 9200
            periodSeconds: 10
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: "-Xmx1g -Xms1g"
            - name: node.master
              value: "true"
            - name: node.data
              value: "true"
              # Define the node name
            - name: node.name
              value: "es-m1"
            - name: cluster.name
              value: "efk-cluster"
            - name: discovery.seed_hosts
              value: "es-m1,es-m2,es-m3"
              # Define the master-eligible nodes; since everything lives in the default namespace, the short Service names are enough
            - name: cluster.initial_master_nodes
              value: "es-m1,es-m2,es-m3"
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: es-data
      # Mount the data directory via hostPath
      volumes:
        - name: es-data
          hostPath:
            path: /es-data/
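Because discovery.seed_hosts uses the bare Service names, they must resolve inside the cluster. A quick spot-check from a throwaway pod (busybox image assumed to be available) might look like this:

# The headless Service should resolve straight to the es-m1 pod IP
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup es-m1.default.svc.cluster.local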

es-m2.yaml

# Define a headless Service: no VIP/load-balancing needed, so a headless Service is enough. The node can be reached at es-m2.default.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: es-m2
  labels:
    app: es-m2
spec:
  clusterIP: None
  ports:
    - port: 9300
      protocol: TCP
      name: port-9300
    - port: 9200
      protocol: TCP
      name: port-9200
  selector:
    app: es-m2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-m2
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: es-m2
  template:
    metadata:
      labels:
        app: es-m2
    spec:
      nodeName: node2
      containers:
        - name: es-m2
          image: elasticsearch:7.2.0
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              port: 9200
            periodSeconds: 10
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: "-Xmx1g -Xms1g"
            - name: node.master
              value: "true"
            - name: node.data
              value: "true"
            - name: node.name
              value: "es-m2"
            - name: cluster.name
              value: "efk-cluster"
            - name: discovery.seed_hosts
              value: "es-m1,es-m2,es-m3"
            - name: cluster.initial_master_nodes
              value: "es-m1,es-m2,es-m3"
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: es-data
      volumes:
        - name: es-data
          hostPath:
            path: /es-data

es-m3.yaml

# Define a headless Service: no VIP/load-balancing needed, so a headless Service is enough. The node can be reached at es-m3.default.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: es-m3
  labels:
    app: es-m3
spec:
  clusterIP: None
  ports:
    - port: 9300
      protocol: TCP
      name: port-9300
    - port: 9200
      protocol: TCP
      name: port-9200
  selector:
    app: es-m3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-m3
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: es-m3
  template:
    metadata:
      labels:
        app: es-m3
    spec:
      nodeName: node3
      containers:
        - name: es-m3
          image: elasticsearch:7.2.0
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              port: 9200
            periodSeconds: 10
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: "-Xmx1g -Xms1g"
            - name: node.master
              value: "true"
            - name: node.data
              value: "true"
            - name: node.name
              value: "es-m3"
            - name: cluster.name
              value: "efk-cluster"
            - name: discovery.seed_hosts
              value: "es-m1,es-m2,es-m3"
            - name: cluster.initial_master_nodes
              value: "es-m1,es-m2,es-m3"
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: es-data
      volumes:
        - name: es-data
          hostPath:
            path: /es-data

Place the three YAML files above in a single directory:

.
├── es
│   ├── es-m1.yaml
│   ├── es-m2.yaml
│   └── es-m3.yaml

Deploy

kubectl apply -f es/

Check that the pods started successfully:

$ kubectl  get pods | grep es-m
es-m1-7579f45f6b-bkpct                     1/1     Running   0          111s
es-m2-5756645d47-kt8lq                     1/1     Running   0          111s
es-m3-55bc786547-trd2d                     1/1     Running   0          111s

If a pod ends up in the Error state, run kubectl logs -f deploy/es-mX (where X is 1-3) and troubleshoot from the logs.
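Once all three pods are Running, it is worth confirming that they actually formed a single cluster. A sketch using kubectl exec (the official image ships curl; if your kubectl does not accept deploy/<name>, use a pod name instead):

# Cluster health: number_of_nodes should be 3 and status green (or yellow right after startup)
kubectl exec deploy/es-m1 -- curl -s "http://localhost:9200/_cluster/health?pretty"
# List the nodes and their roles (m = master-eligible, d = data)
kubectl exec deploy/es-m1 -- curl -s "http://localhost:9200/_cat/nodes?v"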

Install the ElasticHQ cluster visualization tool

elasticsearch-HQ

es-hq.yaml

apiVersion: v1
kind: Service
metadata:
  name: eshq
  labels:
    app: eshq
spec:
  ports:
    - port: 5000
      protocol: TCP
      name: ts1
  selector:
    app: eshq
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eshq
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: eshq
  template:
    metadata:
      labels:
        app: eshq
    spec:
      containers:
        - name: eshq
          image: elastichq/elasticsearch-hq
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              port: 5000
            periodSeconds: 10
          resources:
            limits:
              memory: 3Gi
          ports:
            - containerPort: 5000
              protocol: TCP
---
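Before the Ingress below is in place, you can reach the UI with a simple port-forward (a sketch using the eshq Service defined above):

# Forward local port 5000 to the eshq Service, then open http://localhost:5000
kubectl port-forward svc/eshq 5000:5000
# In the HQ connection box, point it at one of the ES services, e.g. http://es-m1:9200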

Ingress

eshq-ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: esd-ingress
  annotations:
    # htpasswd -c auth root
    # kubectl -n default create secret generic basic-auth --from-file=auth
    # 4dbdeb4cffa8b79703f169d832d199d4
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - root"
spec:
  tls:
    - hosts:
        - eshq.xxx.com
      secretName: xxx-certs
  rules:
    - host: eshq.xxx.com
      http:
        paths:
          - backend:
              serviceName: eshq
              servicePort: 5000
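The auth annotations above assume a Secret named basic-auth already exists. Spelled out from the commands referenced in the comments (htpasswd comes from the httpd-tools/apache2-utils package), the setup might look like:

# Generate an htpasswd file for user "root" (you will be prompted for a password)
htpasswd -c auth root
# Create the Secret that the auth-secret annotation points to
kubectl -n default create secret generic basic-auth --from-file=auth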