Setting Up an Elasticsearch Cluster on Kubernetes

Following the MySQL and Redis clusters, we now deploy an Elasticsearch cluster on Kubernetes.

The detailed process is as follows.

First, create a few configuration files.

PersistentVolume

We again use NFS as the backing storage. The NFS setup itself is not covered here; see the configuration file below.
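For reference, the server-side exports might look roughly like this; a sketch assuming the NFS server is rhonin-vm1 and the worker nodes sit on 192.168.0.0/24 (the subnet and export options are assumptions, adjust them to your environment):

# on the NFS server (rhonin-vm1): create one directory per PersistentVolume
mkdir -p /data/es/pv1 /data/es/pv2 /data/es/pv3

# /etc/exports -- the subnet and options below are assumptions
/data/es/pv1 192.168.0.0/24(rw,sync,no_root_squash)
/data/es/pv2 192.168.0.0/24(rw,sync,no_root_squash)
/data/es/pv3 192.168.0.0/24(rw,sync,no_root_squash)

# reload the export table
exportfs -ra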

1-pv.yaml. We need 3 Elasticsearch nodes, so we create 3 PersistentVolumes.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-es-pv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: rhonin-vm1
    path: /data/es/pv1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-es-pv2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: rhonin-vm1
    path: /data/es/pv2

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-es-pv3
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: rhonin-vm1
    path: /data/es/pv3
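Before applying the manifests, you can confirm from a Kubernetes node that the three exports are actually visible:

showmount -e rhonin-vm1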

Namespace

Create a namespace named elasticsearch.

2-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: elasticsearch

Service

3-services.yaml

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
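Because clusterIP is None, this is a headless Service: DNS returns the addresses of the individual Pods rather than a single virtual IP, which is what gives the StatefulSet members below stable names like es7-cluster-0.elasticsearch. Once the Pods are up, you can check the DNS records with a throwaway busybox Pod (the Pod name dns-test is arbitrary):

kubectl run -it --rm dns-test --image=busybox --restart=Never \
  --namespace=elasticsearch -- nslookup elasticsearch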

StatefulSet

4-stateful-set.yaml. This is the most critical piece of configuration.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es7-cluster
  namespace: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: es7-cluster
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.minimum_master_nodes # legacy 6.x setting, deprecated and ignored by 7.x; see the official Elasticsearch docs
              value: "2"
            - name: discovery.seed_hosts # master-eligible nodes to contact for discovery; see the official Elasticsearch docs
              value: "es7-cluster-0.elasticsearch,es7-cluster-1.elasticsearch,es7-cluster-2.elasticsearch"
            - name: cluster.initial_master_nodes # initial master nodes; in older versions the related setting was discovery.zen.minimum_master_nodes
              value: "es7-cluster-0,es7-cluster-1,es7-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms2g -Xmx2g" # adjust to the available resources and your needs
      initContainers:
        - name: fix-permissions # the official image runs Elasticsearch as uid/gid 1000
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map # Elasticsearch requires vm.max_map_count >= 262144 on the host
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit # raise the open-file limit
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "ulimit -n 65536"]
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

Pay attention to the ES_JAVA_OPTS value: the minimum and maximum heap sizes must be identical, e.g. -Xms2g -Xmx2g. The host also needs at least 4 GB of RAM (8 GB or more is ideal); otherwise, even if the cluster manages to start, it will crash and restart indefinitely.
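If you want Kubernetes itself to account for this, you could extend the container's resources block with memory requests and limits sized to the heap. A sketch, where the 3Gi/4Gi figures are assumptions (roughly the 2g heap plus Elasticsearch's off-heap overhead):

          resources:
            limits:
              cpu: 1000m
              memory: 4Gi # assumed limit: 2g heap plus off-heap overhead
            requests:
              cpu: 100m
              memory: 3Gi # assumed request; adjust to your nodes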

Deployment

Apply the four configuration files in order:

kubectl apply -f 1-pv.yaml
kubectl apply -f 2-namespace.yaml
kubectl apply -f 3-services.yaml
kubectl apply -f 4-stateful-set.yaml
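The Pods are created one at a time in StatefulSet order. You can watch the rollout and confirm that the PersistentVolumeClaims bound to the NFS volumes:

kubectl rollout status statefulset/es7-cluster --namespace=elasticsearch
kubectl get pods --namespace=elasticsearch
kubectl get pvc --namespace=elasticsearch # data-es7-cluster-0/1/2 should be Bound
kubectl get pv # the three NFS volumes should show STATUS Bound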

Verifying the Connection

Inside Kubernetes, other Pods can connect to the Elasticsearch cluster using the Service name elasticsearch.elasticsearch on port 9200.
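For example, a throwaway busybox Pod can query the cluster through that name (es-test is just an arbitrary Pod name):

kubectl run -it --rm es-test --image=busybox --restart=Never \
  -- wget -qO- http://elasticsearch.elasticsearch:9200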

As a temporary test, we can use kubectl port-forward to expose port 9200:

kubectl port-forward --namespace=elasticsearch --address 0.0.0.0 service/elasticsearch 9200:9200 

Then open http://localhost:9200/_cluster/health?pretty to see the cluster health information.
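A healthy three-node cluster reports something like the following (abbreviated; the exact counts depend on your indices):

{
  "cluster_name" : "es7-cluster",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  ...
}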
