Deploying RocketMQ 5 on Kubernetes: A Complete Walkthrough
Background
I needed to deploy RocketMQ 5 in a development environment to verify the proxy-related features of the new version. That environment has Kubernetes, but no Helm and no internet access.
Since there is not much material about this online, I am documenting the whole procedure here.
Procedure
1. Pull the RocketMQ 5 chart from a Helm repository
I used a chart repository published by a community maintainer:
```shell
## Add the Helm repository
helm repo add rocketmq-repo https://helm-charts.itboon.top/rocketmq
helm repo update rocketmq-repo

## Search the repository
helm search repo rocketmq

## Pull both charts locally
helm pull itboon/rocketmq
helm pull itboon/rocketmq-cluster

## Unpack
tar -zxf rocketmq.tgz
```
2. Single-instance startup test
Enter the chart directory and edit values.yaml:
```yaml
clusterName: "rocketmq-helm"

image:
  repository: "apache/rocketmq"
  pullPolicy: IfNotPresent
  tag: "5.3.0"

podSecurityContext:
  fsGroup: 3000
  runAsUser: 3000

broker:
  size:
    master: 1
    replica: 0
  # podSecurityContext: {}
  # containerSecurityContext: {}
  master:
    brokerRole: ASYNC_MASTER
    jvm:
      maxHeapSize: 256M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 200m
        memory: 256Mi
  replica:
    jvm:
      maxHeapSize: 256M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 4
        memory: 512Mi
      requests:
        cpu: 50m
        memory: 256Mi
  hostNetwork: false
  persistence:
    enabled: true
    size: 100Mi
    #storageClass: "local-storage"
  aclConfigMapEnabled: false
  aclConfig: |
    globalWhiteRemoteAddresses:
    - '*'
    - 10.*.*.*
    - 192.168.*.*
  config:
    ## brokerClusterName, brokerName, brokerRole, brokerId are generated by the built-in startup script
    deleteWhen: "04"
    fileReservedTime: "48"
    flushDiskType: "ASYNC_FLUSH"
    waitTimeMillsInSendQueue: "1000"
    # aclEnable: true
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## broker.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6

nameserver:
  replicaCount: 1
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 256Mi
      ephemeral-storage: 256Mi
    requests:
      cpu: 100m
      memory: 256Mi
      ephemeral-storage: 256Mi
  persistence:
    enabled: false
    size: 256Mi
    #storageClass: "local-storage"
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## nameserver.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## nameserver.service
  service:
    annotations: {}
    type: ClusterIP

proxy:
  enabled: true
  replicaCount: 1
  jvm:
    maxHeapSize: 600M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## proxy.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## proxy.service
  service:
    annotations: {}
    type: ClusterIP

dashboard:
  enabled: true
  replicaCount: 1
  image:
    repository: "apacherocketmq/rocketmq-dashboard"
    pullPolicy: IfNotPresent
    tag: "1.0.0"
  auth:
    enabled: true
    users:
      - name: admin
        password: admin
        isAdmin: true
      - name: user01
        password: userPass
  jvm:
    maxHeapSize: 256M
  resources:
    limits:
      cpu: 1
      memory: 512Mi
    requests:
      cpu: 20m
      memory: 512Mi
  ## dashboard.readinessProbe
  readinessProbe:
    failureThreshold: 6
    httpGet:
      path: /
      port: http
  livenessProbe: {}
  service:
    annotations: {}
    type: ClusterIP
    # nodePort: 31007
  ingress:
    enabled: false
    className: ""
    annotations: {}
      # nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8,124.160.30.50
    hosts:
      - host: rocketmq-dashboard.example.com
    tls: []
    #  - secretName: example-tls
    #    hosts:
    #      - rocketmq-dashboard.example.com

## controller mode is an experimental feature
controllerModeEnabled: false

controller:
  enabled: false
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
  persistence:
    enabled: true
    size: 256Mi
    accessModes:
      - ReadWriteOnce
  ## controller.service
  service:
    annotations: {}
  ## controller.config
  config:
    controllerDLegerGroup: group1
    enableElectUncleanMaster: false
    notifyBrokerRoleChanged: true
  ## controller.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
```
Start it with Helm:

```shell
helm upgrade --install rocketmq \
  --namespace rocketmq-demo \
  --create-namespace \
  --set broker.persistence.enabled="false" \
  ./rocketmq
```
3. StorageClass / PV configuration
Storage is backed by local hostPath directories.
StorageClass:
```yaml
# sc_local.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    openebs.io/cas-type: local
    storageclass.kubernetes.io/is-default-class: "false"
    cas.openebs.io/config: |
      #hostpath type will create a PV by
      # creating a sub-directory under the
      # BASEPATH provided below.
      - name: StorageType
        value: "hostpath"
      #Specify the location (directory) where
      # PV(volume) data will be saved.
      # A sub-directory with pv-name will be
      # created. When the volume is deleted, the
      # PV sub-directory will be deleted.
      #Default value is /var/openebs/local
      - name: BasePath
        value: "/tmp/storage"
provisioner: openebs.io/local
volumeBindingMode: Immediate
reclaimPolicy: Retain
```

```shell
kubectl apply -f sc_local.yaml
```
PV (broker only):
```yaml
# local_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: broker-storage-rocketmq-broker-master-0
  namespace: rocketmq-demo
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /tmp/storage
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: broker-storage-rocketmq-broker-replica-id1-0
  namespace: rocketmq-demo
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /tmp/storageSlave
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-storage
  volumeMode: Filesystem
```

```shell
kubectl apply -f local_pv.yaml
## To remove all PVs when re-testing:
kubectl delete pv --all
```
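For reference, each PV above has to satisfy the PVC that the broker StatefulSet's volumeClaimTemplates generates for each pod. A sketch of what that generated claim roughly looks like (only the fields the PV must match are shown; the name follows the `<claim-template>-<pod-name>` pattern, which is also why the PVs above are named the way they are):

```yaml
# Sketch of the auto-generated PVC (assumption: rendered by the chart's
# volumeClaimTemplates; the PV's storageClassName, access mode, and
# capacity must be compatible with these fields for binding to succeed).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: broker-storage-rocketmq-broker-master-0
  namespace: rocketmq-demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Mi
```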
4. Cluster startup test
Edit values.yaml again; the main change is lowering the resource settings:
```yaml
clusterName: "rocketmq-helm"
nameOverride: rocketmq

image:
  repository: "apache/rocketmq"
  pullPolicy: IfNotPresent
  tag: "5.3.0"

podSecurityContext:
  fsGroup: 3000
  runAsUser: 3000

broker:
  size:
    master: 1
    replica: 1
  # podSecurityContext: {}
  # containerSecurityContext: {}
  master:
    brokerRole: ASYNC_MASTER
    jvm:
      maxHeapSize: 512M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 2
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 128Mi
  replica:
    jvm:
      maxHeapSize: 256M
      # javaOptsOverride: ""
    resources:
      limits:
        cpu: 2
        memory: 256Mi
      requests:
        cpu: 50m
        memory: 128Mi
  hostNetwork: false
  persistence:
    enabled: true
    size: 100Mi
    #storageClass: "local-storage"
  aclConfigMapEnabled: false
  aclConfig: |
    globalWhiteRemoteAddresses:
    - '*'
    - 10.*.*.*
    - 192.168.*.*
  config:
    ## brokerClusterName, brokerName, brokerRole, brokerId are generated by the built-in startup script
    deleteWhen: "04"
    fileReservedTime: "48"
    flushDiskType: "ASYNC_FLUSH"
    waitTimeMillsInSendQueue: "1000"
    # aclEnable: true
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## broker.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6

nameserver:
  replicaCount: 1
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 1
      memory: 256Mi
      ephemeral-storage: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
      ephemeral-storage: 128Mi
  persistence:
    enabled: false
    size: 128Mi
    #storageClass: "local-storage"
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## nameserver.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## nameserver.service
  service:
    annotations: {}
    type: ClusterIP

proxy:
  enabled: true
  replicaCount: 2
  jvm:
    maxHeapSize: 512M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
  affinityOverride: {}
  tolerations: []
  nodeSelector: {}
  ## proxy.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
  ## proxy.service
  service:
    annotations: {}
    type: ClusterIP

dashboard:
  enabled: false
  replicaCount: 1
  image:
    repository: "apacherocketmq/rocketmq-dashboard"
    pullPolicy: IfNotPresent
    tag: "1.0.0"
  auth:
    enabled: true
    users:
      - name: admin
        password: admin
        isAdmin: true
      - name: user01
        password: userPass
  jvm:
    maxHeapSize: 256M
  resources:
    limits:
      cpu: 1
      memory: 256Mi
    requests:
      cpu: 20m
      memory: 128Mi
  ## dashboard.readinessProbe
  readinessProbe:
    failureThreshold: 6
    httpGet:
      path: /
      port: http
  livenessProbe: {}
  service:
    annotations: {}
    type: ClusterIP
    # nodePort: 31007
  ingress:
    enabled: false
    className: ""
    annotations: {}
      # nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8,124.160.30.50
    hosts:
      - host: rocketmq-dashboard.example.com
    tls: []
    #  - secretName: example-tls
    #    hosts:
    #      - rocketmq-dashboard.example.com

## controller mode is an experimental feature
controllerModeEnabled: false

controller:
  enabled: false
  replicaCount: 3
  jvm:
    maxHeapSize: 256M
    # javaOptsOverride: ""
  resources:
    limits:
      cpu: 2
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
  persistence:
    enabled: true
    size: 128Mi
    accessModes:
      - ReadWriteOnce
  ## controller.service
  service:
    annotations: {}
  ## controller.config
  config:
    controllerDLegerGroup: group1
    enableElectUncleanMaster: false
    notifyBrokerRoleChanged: true
  ## controller.readinessProbe
  readinessProbe:
    tcpSocket:
      port: main
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 3
    failureThreshold: 6
```
5. Offline installation
Export the rendered manifests with helm template:

```shell
helm template rocketmq ./rocketmq-cluster --output-dir ./rocketmq-cluster-yaml
```
Note: after rendering to plain YAML, the namespace originally set via Helm is gone.
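One way to re-add the lost namespace is to inject it after the top-level `metadata:` line of each exported manifest. A sketch, assuming GNU sed and one resource per file (as `helm template --output-dir` produces), demonstrated here on a sample file rather than the real export directory:

```shell
# Sample exported manifest (stand-in for a file under rocketmq-cluster-yaml/).
printf 'apiVersion: v1\nkind: Service\nmetadata:\n  name: rocketmq-nameserver\n' > /tmp/sample-svc.yaml

# Insert "namespace: rocketmq" right after the first top-level "metadata:".
# GNU sed: the 0,/regex/ address limits the substitution to the first match.
sed -i '0,/^metadata:/s//metadata:\n  namespace: rocketmq/' /tmp/sample-svc.yaml

cat /tmp/sample-svc.yaml
```

In practice you would loop this over every file under rocketmq-cluster-yaml/, or simply pass `-n rocketmq` to `kubectl apply`, which fills in the namespace for resources that do not set one explicitly.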
Validate by applying the manifests:

```shell
kubectl apply -f rocketmq-cluster-yaml/ --recursive
kubectl delete -f rocketmq-cluster-yaml/ --recursive
```
Transfer the YAML to the offline environment:

```shell
## Install the transfer tool
yum install lrzsz
## Pack the yaml directory
tar czvf folder.tar.gz itboon
sz folder.tar.gz
```
Appendix
The final rendered deployment YAML:
- nameserver
```yaml
---
# Source: rocketmq-cluster/templates/nameserver/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "rocketmq-nameserver"
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: nameserver
  serviceName: "rocketmq-nameserver-headless"
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: nameserver
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: nameserver
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: nameserver
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: ROCKETMQ_PROCESS_ROLE
              value: nameserver
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms512M -Xmx512M
          ports:
            - containerPort: 9876
              name: main
              protocol: TCP
          resources:
            limits:
              cpu: 1
              ephemeral-storage: 512Mi
              memory: 512Mi
            requests:
              cpu: 100m
              ephemeral-storage: 256Mi
              memory: 256Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown namesrv"]
          volumeMounts:
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
            - mountPath: /etc/rocketmq/base-cm
              name: base-cm
            - mountPath: /home/rocketmq/logs
              name: nameserver-storage
              subPath: logs
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 15
      volumes:
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
        - configMap:
            name: rocketmq-server-config
          name: base-cm
        - name: nameserver-storage
          emptyDir: {}
---
# Source: rocketmq-cluster/templates/nameserver/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: rocketmq-nameserver
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
    component: nameserver
spec:
  ports:
    - port: 9876
      protocol: TCP
      targetPort: 9876
  selector:
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    component: nameserver
  type: "ClusterIP"
---
# Source: rocketmq-cluster/templates/nameserver/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: "rocketmq-nameserver-headless"
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
    component: nameserver
spec:
  clusterIP: "None"
  publishNotReadyAddresses: true
  ports:
    - port: 9876
      protocol: TCP
      targetPort: 9876
  selector:
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    component: nameserver
```
- broker
```yaml
---
# Source: rocketmq-cluster/templates/broker/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rocketmq-broker-master
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 1
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: broker
      broker: rocketmq-broker-master
  serviceName: ""
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: broker
        broker: rocketmq-broker-master
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: broker
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: broker
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ROCKETMQ_PROCESS_ROLE
              value: broker
            - name: NAMESRV_ADDR
              value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
            - name: ROCKETMQ_CONF_brokerId
              value: "0"
            - name: ROCKETMQ_CONF_brokerRole
              value: "ASYNC_MASTER"
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms1G -Xmx1G
          ports:
            - containerPort: 10909
              name: vip
              protocol: TCP
            - containerPort: 10911
              name: main
              protocol: TCP
            - containerPort: 10912
              name: ha
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 512Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown broker"]
          volumeMounts:
            - mountPath: /home/rocketmq/logs
              name: broker-storage
              subPath: rocketmq-broker/logs
            - mountPath: /home/rocketmq/store
              name: broker-storage
              subPath: rocketmq-broker/store
            - mountPath: /etc/rocketmq/broker-base.conf
              name: broker-base-config
              subPath: broker-base.conf
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            items:
              - key: broker-base.conf
                path: broker-base.conf
            name: rocketmq-server-config
          name: broker-base-config
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-path
        resources:
          requests:
            storage: "100Mi"
---
# Source: rocketmq-cluster/templates/broker/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rocketmq-broker-replica-id1
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 1
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: broker
      broker: rocketmq-broker-replica-id1
  serviceName: ""
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: broker
        broker: rocketmq-broker-replica-id1
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: broker
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: broker
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ROCKETMQ_PROCESS_ROLE
              value: broker
            - name: NAMESRV_ADDR
              value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
            - name: ROCKETMQ_CONF_brokerId
              value: "1"
            - name: ROCKETMQ_CONF_brokerRole
              value: "SLAVE"
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms1G -Xmx1G
          ports:
            - containerPort: 10909
              name: vip
              protocol: TCP
            - containerPort: 10911
              name: main
              protocol: TCP
            - containerPort: 10912
              name: ha
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown broker"]
          volumeMounts:
            - mountPath: /home/rocketmq/logs
              name: broker-storage
              subPath: rocketmq-broker/logs
            - mountPath: /home/rocketmq/store
              name: broker-storage
              subPath: rocketmq-broker/store
            - mountPath: /etc/rocketmq/broker-base.conf
              name: broker-base-config
              subPath: broker-base.conf
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            items:
              - key: broker-base.conf
                path: broker-base.conf
            name: rocketmq-server-config
          name: broker-base-config
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-path
        resources:
          requests:
            storage: "100Mi"
```
- proxy
```yaml
---
# Source: rocketmq-cluster/templates/proxy/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "rocketmq-proxy"
  namespace: rocketmq
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  minReadySeconds: 20
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: rocketmq
      app.kubernetes.io/instance: rocketmq
      component: proxy
  template:
    metadata:
      annotations:
        checksum/config: 9323bc706d85f980c210e9823264a63548598b649c4935f9db6559d4fecbcc93
      labels:
        app.kubernetes.io/name: rocketmq
        app.kubernetes.io/instance: rocketmq
        component: proxy
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 5
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: rocketmq
                    app.kubernetes.io/instance: rocketmq
                    component: proxy
                topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 3000
        runAsUser: 3000
      containers:
        - name: proxy
          image: "apache/rocketmq:5.3.0"
          imagePullPolicy: IfNotPresent
          command:
            - sh
            - /mq-server-start.sh
          env:
            - name: NAMESRV_ADDR
              value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
            - name: ROCKETMQ_PROCESS_ROLE
              value: proxy
            - name: RMQ_PROXY_CONFIG_PATH
              value: /etc/rocketmq/proxy.json
            - name: ROCKETMQ_JAVA_OPTIONS_HEAP
              value: -Xms1G -Xmx1G
          ports:
            - name: main
              containerPort: 8080
              protocol: TCP
            - name: grpc
              containerPort: 8081
              protocol: TCP
          resources:
            limits:
              cpu: 2
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 512Mi
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 10
            periodSeconds: 10
            tcpSocket:
              port: main
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5; ./mqshutdown proxy"]
          volumeMounts:
            - mountPath: /mq-server-start.sh
              name: mq-server-start-sh
              subPath: mq-server-start.sh
            - mountPath: /etc/rocketmq/proxy.json
              name: proxy-json
              subPath: proxy.json
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 15
      volumes:
        - configMap:
            items:
              - key: mq-server-start.sh
                path: mq-server-start.sh
            name: rocketmq-server-config
            defaultMode: 0755
          name: mq-server-start-sh
        - configMap:
            items:
              - key: proxy.json
                path: proxy.json
            name: rocketmq-server-config
          name: proxy-json
---
# Source: rocketmq-cluster/templates/proxy/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rocketmq-proxy
  labels:
    helm.sh/chart: rocketmq-cluster-12.3.2
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    app.kubernetes.io/version: "5.3.0"
    app.kubernetes.io/managed-by: Helm
    component: proxy
spec:
  ports:
    - port: 8080
      name: main
      protocol: TCP
      targetPort: 8080
    - port: 8081
      name: grpc
      protocol: TCP
      targetPort: 8081
  selector:
    app.kubernetes.io/name: rocketmq
    app.kubernetes.io/instance: rocketmq
    component: proxy
  type: "ClusterIP"
```
- configmap
```yaml
---
# Source: rocketmq-cluster/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rocketmq-server-config
  namespace: rocketmq
data:
  broker-base.conf: |
    deleteWhen = 04
    fileReservedTime = 48
    flushDiskType = ASYNC_FLUSH
    waitTimeMillsInSendQueue = 1000
    brokerClusterName = rocketmq-helm
  controller-base.conf: |
    controllerDLegerGroup = group1
    enableElectUncleanMaster = false
    notifyBrokerRoleChanged = true
    controllerDLegerPeers = n0-rocketmq-controller-0.rocketmq-controller.rocketmq.svc:9878;n1-rocketmq-controller-1.rocketmq-controller.rocketmq.svc:9878;n2-rocketmq-controller-2.rocketmq-controller.rocketmq.svc:9878
    controllerStorePath = /home/rocketmq/controller-data
  proxy.json: |
    {
      "rocketMQClusterName": "rocketmq-helm"
    }
  mq-server-start.sh: |
    java -version
    if [ $? -ne 0 ]; then
      echo "[ERROR] Missing java runtime"
      exit 50
    fi
    if [ -z "${ROCKETMQ_HOME}" ]; then
      echo "[ERROR] Missing env ROCKETMQ_HOME"
      exit 50
    fi
    if [ -z "${ROCKETMQ_PROCESS_ROLE}" ]; then
      echo "[ERROR] Missing env ROCKETMQ_PROCESS_ROLE"
      exit 50
    fi

    export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
    export CLASSPATH=".:${ROCKETMQ_HOME}/conf:${ROCKETMQ_HOME}/lib/*:${CLASSPATH}"
    JAVA_OPT="${JAVA_OPT} -server"
    if [ -n "$ROCKETMQ_JAVA_OPTIONS_OVERRIDE" ]; then
      JAVA_OPT="${JAVA_OPT} ${ROCKETMQ_JAVA_OPTIONS_OVERRIDE}"
    else
      JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC"
      JAVA_OPT="${JAVA_OPT} ${ROCKETMQ_JAVA_OPTIONS_EXT}"
      JAVA_OPT="${JAVA_OPT} ${ROCKETMQ_JAVA_OPTIONS_HEAP}"
    fi
    JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"

    export BROKER_CONF_FILE="$HOME/broker.conf"
    export CONTROLLER_CONF_FILE="$HOME/controller.conf"

    update_broker_conf() {
      local key=$1
      local value=$2
      sed -i "/^${key} *=/d" ${BROKER_CONF_FILE}
      echo "${key} = ${value}" >> ${BROKER_CONF_FILE}
    }

    init_broker_role() {
      if [ "${ROCKETMQ_CONF_brokerRole}" = "SLAVE" ]; then
        update_broker_conf "brokerRole" "SLAVE"
      elif [ "${ROCKETMQ_CONF_brokerRole}" = "SYNC_MASTER" ]; then
        update_broker_conf "brokerRole" "SYNC_MASTER"
      else
        update_broker_conf "brokerRole" "ASYNC_MASTER"
      fi
      if echo "${ROCKETMQ_CONF_brokerId}" | grep -E '^[0-9]+$'; then
        update_broker_conf "brokerId" "${ROCKETMQ_CONF_brokerId}"
      fi
    }

    init_broker_conf() {
      rm -f ${BROKER_CONF_FILE}
      cp /etc/rocketmq/broker-base.conf ${BROKER_CONF_FILE}
      echo "" >> ${BROKER_CONF_FILE}
      echo "# generated config" >> ${BROKER_CONF_FILE}
      broker_name_seq=${HOSTNAME##*-}
      if [ -n "$MY_POD_NAME" ]; then
        broker_name_seq=${MY_POD_NAME##*-}
      fi
      update_broker_conf "brokerName" "broker-g${broker_name_seq}"
      if [ "$enableControllerMode" != "true" ]; then
        init_broker_role
      fi
      echo "[exec] cat ${BROKER_CONF_FILE}"
      cat ${BROKER_CONF_FILE}
    }

    init_acl_conf() {
      if [ -f /etc/rocketmq/acl/plain_acl.yml ]; then
        rm -f "${ROCKETMQ_HOME}/conf/plain_acl.yml"
        ln -sf "/etc/rocketmq/acl" "${ROCKETMQ_HOME}/conf/acl"
      fi
    }

    init_controller_conf() {
      rm -f ${CONTROLLER_CONF_FILE}
      cp /etc/rocketmq/base-cm/controller-base.conf ${CONTROLLER_CONF_FILE}
      controllerDLegerSelfId="n${HOSTNAME##*-}"
      if [ -n "$MY_POD_NAME" ]; then
        controllerDLegerSelfId="n${MY_POD_NAME##*-}"
      fi
      sed -i "/^controllerDLegerSelfId *=/d" ${CONTROLLER_CONF_FILE}
      echo "controllerDLegerSelfId = ${controllerDLegerSelfId}" >> ${CONTROLLER_CONF_FILE}
      cat ${CONTROLLER_CONF_FILE}
    }

    if [ "$ROCKETMQ_PROCESS_ROLE" = "broker" ]; then
      init_broker_conf
      init_acl_conf
      set -x
      java ${JAVA_OPT} org.apache.rocketmq.broker.BrokerStartup -c ${BROKER_CONF_FILE}
    elif [ "$ROCKETMQ_PROCESS_ROLE" = "controller" ]; then
      init_controller_conf
      set -x
      java ${JAVA_OPT} org.apache.rocketmq.controller.ControllerStartup -c ${CONTROLLER_CONF_FILE}
    elif [ "$ROCKETMQ_PROCESS_ROLE" = "nameserver" ] || [ "$ROCKETMQ_PROCESS_ROLE" = "mqnamesrv" ]; then
      set -x
      if [ "$enableControllerInNamesrv" = "true" ]; then
        init_controller_conf
        java ${JAVA_OPT} org.apache.rocketmq.namesrv.NamesrvStartup -c ${CONTROLLER_CONF_FILE}
      else
        java ${JAVA_OPT} org.apache.rocketmq.namesrv.NamesrvStartup
      fi
    elif [ "$ROCKETMQ_PROCESS_ROLE" = "proxy" ]; then
      set -x
      if [ -f $RMQ_PROXY_CONFIG_PATH ]; then
        java ${JAVA_OPT} org.apache.rocketmq.proxy.ProxyStartup -pc $RMQ_PROXY_CONFIG_PATH
      else
        java ${JAVA_OPT} org.apache.rocketmq.proxy.ProxyStartup
      fi
    else
      echo "[ERROR] Missing env ROCKETMQ_PROCESS_ROLE"
      exit 50
    fi
```
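The `update_broker_conf` helper in the startup script above is worth a closer look: it first deletes any existing line for the key, then appends the new value, so repeated invocations never leave duplicate keys in broker.conf. The same function can be exercised standalone, with `BROKER_CONF_FILE` pointed at a temp file for the demo:

```shell
# Same helper as in mq-server-start.sh, run against a temp file.
BROKER_CONF_FILE=/tmp/demo-broker.conf

update_broker_conf() {
  local key=$1
  local value=$2
  # Drop any existing "key = ..." line, then append the new value.
  sed -i "/^${key} *=/d" ${BROKER_CONF_FILE}
  echo "${key} = ${value}" >> ${BROKER_CONF_FILE}
}

# Seed the file with values that will be overwritten.
printf 'brokerRole = SYNC_MASTER\nbrokerId = 9\n' > "$BROKER_CONF_FILE"
update_broker_conf "brokerRole" "ASYNC_MASTER"
update_broker_conf "brokerId" "0"
cat "$BROKER_CONF_FILE"
```

After the two calls the file contains exactly one `brokerRole` line (`ASYNC_MASTER`) and one `brokerId` line (`0`).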
Pitfalls
- Storage permission problem
After configuring the PVs, the broker kept failing on startup.
Check the pod events:

```shell
kubectl describe pod rocketmq-broker-master-0 -n rocketmq-demo
```

Then check the application logs:

```shell
kubectl logs rocketmq-broker-master-0 -n rocketmq-demo
```

The key error messages:
```
03:30:58,822 |-ERROR in org.apache.rocketmq.logging.ch.qos.logback.core.rolling.RollingFileAppender[RocketmqAuthAuditAppender_inner] - Failed to create parent directories for [/home/rocketmq/logs/rocketmqlogs/auth_audit.log]
03:30:58,822 |-ERROR in org.apache.rocketmq.logging.ch.qos.logback.core.rolling.RollingFileAppender[RocketmqAuthAuditAppender_inner] - openFile(/home/rocketmq/logs/rocketmqlogs///auth_audit.log,true) call failed. java.io.FileNotFoundException: /home/rocketmq/logs/rocketmqlogs/auth_audit.log (No such file or directory)
	at java.io.FileNotFoundException: /home/rocketmq/logs/rocketmqlogs/auth_audit.log (No such file or directory)
java.lang.NullPointerException
	at org.apache.rocketmq.broker.schedule.ScheduleMessageService.configFilePath(ScheduleMessageService.java:272)
	at org.apache.rocketmq.common.ConfigManager.persist(ConfigManager.java:83)
	at org.apache.rocketmq.broker.BrokerController.shutdownBasicService(BrokerController.java:1478)
	at org.apache.rocketmq.broker.BrokerController.shutdown(BrokerController.java:1565)
	at org.apache.rocketmq.broker.BrokerStartup.createBrokerController(BrokerStartup.java:250)
	at org.apache.rocketmq.broker.BrokerStartup.main(BrokerStartup.java:52)
```
Searching online showed that the pod has no read/write permission on the mounted local directory.
Fixes:
1. Move the directory out of root's home
The original directory belonged to the root account, while Kubernetes runs the pod under a different account, so I moved the mounted directory to /tmp and updated the PV manifests above accordingly.
2. Create the PV directories in advance
Create /tmp/storage beforehand; otherwise startup fails with the PVC complaining that the directory does not exist.
3. Open read/write permission on the directory and everything below it with chmod; use -R to recurse into all subdirectories:

```shell
chmod -R 777 storage
```
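The three fixes can be combined into a single preparation step on the node that hosts the volumes. A sketch: the two paths are the ones the PVs above mount, and 777 is the blunt permission used here for a dev environment (a tighter setup would chown to uid/gid 3000, matching the fsGroup/runAsUser in values.yaml):

```shell
# Pre-create the hostPath directories for the master and replica brokers
# so the pods do not fail on a missing directory.
mkdir -p /tmp/storage /tmp/storageSlave

# Open read/write for the pod user; -R recurses into subdirectories.
chmod -R 777 /tmp/storage /tmp/storageSlave
```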
- Master/replica problem
After fixing the file permissions, startup failed again with a new error.
Notably, of the one-master-one-replica pair, one broker started normally while the other kept hitting the error. According to what I found online, this error normally appears when two brokers are deployed on the same machine.
But our environment is a K8s cluster where nodes should be isolated from each other, so my guess was that both PVs were mounting the same host directory. I changed the two PVs to mount different directories, pointing the replica at storageSlave, and the next startup succeeded.
- Namespace problem
After moving to the development environment, the broker and nameserver started normally but the proxy would not start, reporting:

```
org.apache.rocketmq.proxy.common.ProxyException: create system broadcast topic DefaultHeartBeatSyncerTopic failed on cluster rocketmq-helm
```
This never happened when starting from the exported YAML in the local environment.
Searching online suggested this error also appears when the broker is not correctly pointed at the nameserver, so I suspected some settings needed to change with the environment. I went through every config in the directory carefully, especially anything involving the broker's and proxy's nameserver addresses.
There was one more difference: the development environment is shared by many people and runs many applications, and the exported YAML starts pods in the K8s default namespace, which is messy and hard to manage. So I added namespace: rocketmq to the YAML files.
The final diagnosis confirmed that this was indeed the cause. The proxy and broker manifests also contain this line providing the nameserver address:

```yaml
value: rocketmq-nameserver-0.rocketmq-nameserver-headless.default.svc:9876
```

which must be changed to:

```yaml
value: rocketmq-nameserver-0.rocketmq-nameserver-headless.rocketmq.svc:9876
```
This environment variable provides the address and port of the RocketMQ NameServer. rocketmq-nameserver-0.rocketmq-nameserver-headless.default.svc is the DNS name of the NameServer pod and 9876 is the NameServer service port; clients and brokers use this address to reach the NameServer for service discovery and metadata synchronization.
Breaking it down: rocketmq-nameserver-0 is the nameserver pod name, rocketmq-nameserver-headless is the name of the headless service, and default is the namespace. So after deploying into a new K8s namespace, this default must also be changed to the rocketmq namespace, otherwise you get the "cannot create topic" error above. Curiously, the broker still starts fine and only the proxy reports the error; presumably the new RocketMQ 5 changed something here.
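Rather than editing every exported file by hand, the namespace segment of the address can be rewritten in one pass. A sketch assuming GNU sed, demonstrated on a sample snippet instead of the real export directory:

```shell
# Sample env line as it appears in the exported broker/proxy manifests.
printf 'value: rocketmq-nameserver-0.rocketmq-nameserver-headless.default.svc:9876\n' > /tmp/namesrv-addr.yaml

# Rewrite the namespace segment of the headless-service DNS name:
# <pod>.<headless-service>.<namespace>.svc:<port>
sed -i 's/nameserver-headless\.default\.svc/nameserver-headless.rocketmq.svc/' /tmp/namesrv-addr.yaml

cat /tmp/namesrv-addr.yaml
```

Against the real export you would run the same substitution over the whole directory, e.g. `grep -rl 'headless.default.svc' rocketmq-cluster-yaml/ | xargs sed -i '...'`.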
Someone who debugged the startup source code pointed at the same spot:
Reading BrokerStartup.java shows this is the namesrv address; if it is missing, the broker may start, but it never registers anything with the namesrv.
The symptom of leaving it unconfigured shows up when the proxy starts, which then always fails with:

```
create system broadcast topic DefaultHeartBeatSyncerTopic failed on cluster DefaultCluster
```
Summary
The above is my personal experience with this deployment; I hope it serves as a useful reference.