Kubernetes Cloud Native in Practice (4) Middleware on Kubernetes


Series Contents

Kubernetes Cloud Native in Practice (1) Installation

Kubernetes Cloud Native in Practice (2) Basic Usage

Kubernetes Cloud Native in Practice (3) NFS/PV/PVC

Kubernetes Cloud Native in Practice (4) Middleware on Kubernetes

Kubernetes Cloud Native in Practice (5) Applications on Kubernetes

Kubernetes Cloud Native in Practice (6) Integrating the ELK Logging Platform

Kubernetes Cloud Native in Practice (7) Application Monitoring

Kubernetes Cloud Native in Practice (8) CI/CD Integration

Kubernetes Cloud Native in Practice (9) Operations Management

Kubernetes Cloud Native in Practice (10) Common Issues

Kubernetes Cloud Native in Practice (11) Screenshots

  1. Full source code of the project: https://github.com/MQPearth/spring-boot-backend
  2. Middleware on Kubernetes
    1. mysql (not recommended): when MySQL is containerized, disk and network become the performance bottleneck, so deploy an independent highly-available cluster instead (below roughly one million rows per table, the gap between containerized and non-containerized MySQL is small; at tens of millions of rows per table, containerized MySQL loses roughly 10%~20% of its performance. link: https://mp.weixin.qq.com/s?__biz=MzIxNTE4MjM4MA==&mid=2247484239&idx=1&sn=015294b6abb14e85213dec92276fb0b0&chksm=979d7e0ca0eaf71a9ec728f7773594f3dbd7cc31fd8b2bef80af14430d00521278773f67eaa2&from=industrynews&version=4.1.9.6042&platform=win#rd)
    2. nacos:
      1. Cluster configuration
      2. Apply the configuration files below, adjusting them as needed: kubectl create -f xxx.yaml
      3. Scale nacos up or down, then run kubectl exec -n nacos nacos-0 -- cat conf/cluster.conf and check whether the output matches the node count (see the sketch after this list)
      4. kubectl create ingress nacos -n nacos --class=nginx --rule="nacos.k8s.com/*=nacos-headless:8848"
      5. Open nacos.k8s.com in a browser (the hostname must resolve to the ingress controller; see the sketch after this list)
      6. Register applications with nacos: just change the original server URL to nacos-headless.nacos, i.e. service name + namespace (a client configuration sketch follows the manifests)
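A quick sketch of steps 3 and 5, assuming a three-node target; <node-ip> is a placeholder for whichever node your ingress-nginx controller is exposed on:

kubectl scale statefulset nacos -n nacos --replicas=3
# every pod's cluster.conf should now list three peers
kubectl exec -n nacos nacos-0 -- cat conf/cluster.conf
# nacos.k8s.com must resolve to the ingress controller before the console opens
echo "<node-ip> nacos.k8s.com" | sudo tee -a /etc/hosts
curl -s http://nacos.k8s.com/nacos/ | head -n 5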
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: nacos
data:
  mysql.host: "10.11.38.190"
  mysql.db.name: "nacos"
  mysql.port: "3307"
  mysql.user: "root"
  mysql.password: "123456"
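  # sample credentials; the identity and token keys below are base64-encoded random values, generate your own for any real deployment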
  nacos.core.auth.server.identity.key: "NzAwN2UwZTMyYWUwNDNiOGFhNTY4NzFhZjI2OTE4YmM="
  nacos.core.auth.server.identity.value: "ZDhjZjNhMThkNjA3NGFkN2JlZTQxMzQyNGJlYzUyOTI="
  nacos.core.auth.plugin.nacos.token.secret.key: "Y2NhMmU0NDZhMDc4NDQ4NGExYTQ2MjQ1YjRlMGYxMWQ="
---
apiVersion: v1
kind: Service
metadata:
  name: nacos-headless
  namespace: nacos
  labels:
    app: nacos
spec:
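  ## publish DNS records for pods that are not yet ready, so peers can discover each other while the cluster bootstraps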
  publishNotReadyAddresses: true 
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for compatibility with nacos 1.4.x
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: nacos
spec:
  podManagementPolicy: Parallel
  serviceName: nacos-headless
  replicas: 2
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        # reconstructed from the official nacos-k8s example: the peer-finder
        # plugin generates conf/cluster.conf from the headless-service peers
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: data
              subPath: peer-finder
      containers:
        - name: nacos
          image: nacos/nacos-server:latest  # no version given in the original; pin one explicitly
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
                  
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: NACOS_AUTH_ENABLE
              value: "true"
            - name: NACOS_CORE_AUTH_ENABLE
              value: "true"
            - name: NACOS_AUTH_TOKEN_EXPIRE_SECONDS
              value: "180000"
            - name: NACOS_CORE_AUTH_SERVER_IDENTITY_KEY
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: nacos.core.auth.server.identity.key
            - name: NACOS_CORE_AUTH_SERVER_IDENTITY_VALUE
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: nacos.core.auth.server.identity.value
            - name: NACOS_CORE_AUTH_PLUGIN_NACOS_TOKEN_SECRET_KEY
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: nacos.core.auth.plugin.nacos.token.secret.key
          volumeMounts:
            - name: data
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
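          # legacy annotation form; newer clusters set spec.storageClassName instead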
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 1Gi
  selector:
    matchLabels:
      app: nacos
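For step 6, here is a minimal bootstrap.yml sketch for a Spring Cloud Alibaba client running inside the same cluster; the application name is illustrative:

# bootstrap.yml (sketch): <service>.<namespace> resolves via cluster DNS
spring:
  application:
    name: spring-boot-backend  # illustrative; use your own application name
  cloud:
    nacos:
      discovery:
        server-addr: nacos-headless.nacos:8848
      config:
        server-addr: nacos-headless.nacos:8848
        file-extension: yaml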
    3. redis: three masters, three replicas
      1. kubectl create -f xxx.yaml
      2. kubectl exec -n redis -it redis-0 -- redis-cli -a 123456 --cluster create --cluster-replicas 1 $(kubectl get pods -n redis -l app=redis -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}') (sample output appears after the manifests below)
      3. which is equivalent to this command: kubectl exec -n redis -it redis-0 -- redis-cli -a 123456 --cluster create --cluster-replicas 1 10.244.140.246:6379 10.244.140.65:6379 10.244.140.247:6379 10.244.140.87:6379 10.244.140.248:6379 10.244.140.89:6379
      4. Verify the cluster state:
      5. kubectl exec -n redis -it redis-0 -- redis-cli -a 123456 --cluster check localhost:6379
      6. Verify the cluster's self-healing: exec into any container and run redis-cli -a 123456 --cluster check localhost:6379
      7. Delete any pod; after k8s recreates it, run redis-cli -a 123456 --cluster check localhost:6379 again and you will see the node has already rejoined the cluster (a sketch of this check follows this list)
      8. This self-healing relies on nodes.conf being persisted to external storage through the PVC, so the data survives when a pod dies. If nodes.conf were wiped at the same time the pod is deleted, the cluster would not recover after the pod restarts
      9. Restart the two k8s worker nodes: every pod IP gets reallocated, yet the cluster stays healthy, because the pods' domain names (via cluster-announce-ip) were used when the cluster was created
      10. Writing new keys works and master-replica sync is normal: redis-cli -a 123456 -c --cluster call redis-0.redis.redis.svc.cluster.local:6379 keys \* (a verification sketch follows the sample output below)
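A sketch of the self-healing check from steps 7 and 8; redis-3 is an arbitrary pod:

# delete one pod and let the StatefulSet recreate it
kubectl delete pod -n redis redis-3
# wait until redis-3 is Running again
kubectl get pods -n redis -w
# the recreated pod rejoins thanks to the nodes.conf persisted on its PVC
kubectl exec -n redis -it redis-3 -- redis-cli -a 123456 --cluster check localhost:6379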
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: redis
    app.kubernetes.io/name: redis
  name: redis
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cm
  namespace: redis
data:
  redis.conf: |
    # header reconstructed: name/namespace come from the StatefulSet's configMap
    # volume below, the password from the redis-cli -a 123456 commands above
    bind 0.0.0.0
    protected-mode no
    port 6379
    requirepass 123456
    masterauth 123456
    databases 16
    always-show-logo yes
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    replica-serve-stale-data yes
    replica-read-only yes
    repl-diskless-sync no
    repl-diskless-sync-delay 5
    repl-disable-tcp-nodelay no
    replica-priority 100
    lazyfree-lazy-eviction no
    lazyfree-lazy-expire no
    lazyfree-lazy-server-del no
    replica-lazy-flush no
    appendonly no
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    aof-load-truncated yes
    aof-use-rdb-preamble yes
    lua-time-limit 5000
    cluster-enabled yes
    cluster-config-file nodes.conf
    cluster-node-timeout 15000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-size -2
    list-compress-depth 0
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    stream-node-max-bytes 4096
    stream-node-max-entries 100
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit replica 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    dynamic-hz yes
    aof-rewrite-incremental-fsync yes
    rdb-save-incremental-fsync yes
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  namespace: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
  type: ClusterIP
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: redis
  name: redis
  namespace: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 6
  serviceName: redis
  template:                     
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6.0.19
        imagePullPolicy: IfNotPresent
        command: 
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--cluster-announce-ip"
        - "$(POD_NAME).$(POD_SERVICE_NAME).$(POD_NAMESPACE).svc.cluster.local"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # spec.serviceName is not a valid downward-API fieldPath; use the Service name directly
        - name: POD_SERVICE_NAME
          value: "redis"
        ports:
        - name: redis-6379
          containerPort: 6379
        volumeMounts:
        - name: "redis-conf"
          mountPath: "/etc/redis"
        - name: "redis-data"
          mountPath: "/data"
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-cm"
          items:
            - key: "redis.conf"
              path: "redis.conf"
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
          type: File
  volumeClaimTemplates:
    - metadata:
        name: "redis-data"
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources: 
          requests:
            storage: 100M
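Sample output of the --cluster create command from step 2: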
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.140.248:6379 to 10.244.140.246:6379
Adding replica 10.244.140.89:6379 to 10.244.140.65:6379
Adding replica 10.244.140.87:6379 to 10.244.140.247:6379
M: bd56f4d44933a5631b033f00c83694444e9b2d51 10.244.140.246:6379
   slots:[0-5460] (5461 slots) master
M: f1ca6281b160f5b3dd0ddaa4fe55b62f7a985fa7 10.244.140.65:6379
   slots:[5461-10922] (5462 slots) master
M: 63713ea0264fe6a646a872b423121a435e814888 10.244.140.247:6379
   slots:[10923-16383] (5461 slots) master
S: 883c295d225da18471c2d87b298b19893ce2313a 10.244.140.87:6379
   replicates 63713ea0264fe6a646a872b423121a435e814888
S: c88da2594b499f7f0773d957bd80d196effcfae3 10.244.140.248:6379
   replicates bd56f4d44933a5631b033f00c83694444e9b2d51
S: f0db516991b5db3e65f3de943be57d230af0aa9b 10.244.140.89:6379
   replicates f1ca6281b160f5b3dd0ddaa4fe55b62f7a985fa7
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 10.244.140.246:6379)
M: bd56f4d44933a5631b033f00c83694444e9b2d51 10.244.140.246:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: c88da2594b499f7f0773d957bd80d196effcfae3 10.244.140.248:6379
   slots: (0 slots) slave
   replicates bd56f4d44933a5631b033f00c83694444e9b2d51
M: f1ca6281b160f5b3dd0ddaa4fe55b62f7a985fa7 10.244.140.65:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: f0db516991b5db3e65f3de943be57d230af0aa9b 10.244.140.89:6379
   slots: (0 slots) slave
   replicates f1ca6281b160f5b3dd0ddaa4fe55b62f7a985fa7
M: 63713ea0264fe6a646a872b423121a435e814888 10.244.140.247:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 883c295d225da18471c2d87b298b19893ce2313a 10.244.140.87:6379
   slots: (0 slots) slave
   replicates 63713ea0264fe6a646a872b423121a435e814888
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
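To confirm item 10: with -c the client follows MOVED redirects, so a write lands on the owning master and is replicated; a quick sketch:

kubectl exec -n redis -it redis-0 -- redis-cli -c -a 123456 set demo-key demo-value
kubectl exec -n redis -it redis-0 -- redis-cli -c -a 123456 get demo-key
# run KEYS on every node to see the key on one master and its replica
kubectl exec -n redis -it redis-0 -- redis-cli -a 123456 --cluster call redis-0.redis.redis.svc.cluster.local:6379 keys \*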
    4. skywalking:
      1. kubectl create -f skywalking-oap.yaml
      2. kubectl create -f skywalking-ui.yaml
      3. kubectl create ingress skywalking-ui -n skywalking-ui --class=nginx --rule="skywalking-ui.k8s.com/*=skywalking-ui:8080"
      4. Open skywalking-ui.k8s.com in a browser (an agent-attach sketch follows the manifests below)
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: skywalking-oap
    app.kubernetes.io/name: skywalking-oap
  name: skywalking-oap
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: skywalking-oap
  name: skywalking-oap
  namespace: skywalking-oap
spec:
  ports:
  - port: 11800
    name: "11800"
    nodePort: 30081    
    protocol: TCP
    targetPort: 11800
  - port: 12800
    name: "12800"
    nodePort: 30082
    protocol: TCP
    targetPort: 12800
  selector:
    app: skywalking-oap
  type: NodePort
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: skywalking-oap
  namespace: skywalking-oap
  labels: 
    app: skywalking-oap
spec:
  replicas: 2
  selector:
    matchLabels:
      app: skywalking-oap
  template:
    metadata:
      labels: 
        app: skywalking-oap
    spec:
      containers:
      - name: skywalking-oap
        image: apache/skywalking-oap-server:9.4.0-java17
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 11800
          name: "tcp-11800"
        - containerPort: 12800
          name: "tcp-12800"
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: SW_STORAGE
          value: "elasticsearch"
        - name: SW_STORAGE_ES_CLUSTER_NODES
          value: "10.11.38.190:9200"
        - name: SW_CLUSTER
          value: "nacos"
        - name: SW_CLUSTER_NACOS_HOST_PORT
          value: "nacos-headless.nacos:8848"
        - name: SW_CLUSTER_NACOS_NAMESPACE
          value: "19a0fa32-ed2e-40f1-a1e1-aae8c81d8cf8"
        - name: SW_CLUSTER_NACOS_USERNAME
          value: "nacos"
        - name: SW_CLUSTER_NACOS_PASSWORD
          value: "nacos"
        - name: SW_CLUSTER_INTERNAL_COM_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SW_CLUSTER_INTERNAL_COM_PORT
          value: "11800"
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: skywalking-ui
    app.kubernetes.io/name: skywalking-ui
  name: skywalking-ui
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: skywalking-ui
  name: skywalking-ui
  namespace: skywalking-ui
spec:
  ports:
  - port: 8080
    name: "8080"
    nodePort: 31081
    protocol: TCP
    targetPort: 8080
  selector:
    app: skywalking-ui
  type: NodePort
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: skywalking-ui
  namespace: skywalking-ui
  labels: 
    app: skywalking-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking-ui
  template:
    metadata:
      labels: 
        app: skywalking-ui
    spec:
      containers:
      - name: skywalking-ui
        image: apache/skywalking-ui:v9.4.0-java17
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: "tcp-8080"
        env:
        - name: TZ
          value: "Asia/Shanghai"
        - name: SW_OAP_ADDRESS
          value: "http://skywalking-oap.skywalking-oap:12800"
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
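Applications report traces to OAP over gRPC on port 11800. A minimal sketch of attaching the SkyWalking Java agent from a pod inside the cluster; the agent path and service name are placeholders:

# start the application with the agent attached (paths are illustrative)
java -javaagent:/skywalking/agent/skywalking-agent.jar \
     -Dskywalking.agent.service_name=spring-boot-backend \
     -Dskywalking.collector.backend_service=skywalking-oap.skywalking-oap:11800 \
     -jar app.jar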