• k8s helm Seata1.5.1


     Deployment method: see this post: https://blog.csdn.net/hunheidaode/article/details/126623672

    Helm chart repo: https://heidaodageshiwo.github.io/helm-chart/

    1. This article builds on the previous post:

    k8s Seata1.5.1 (hunheidaode's blog, CSDN)

    It makes only minor changes on top of that one: the deployment now goes through Helm, but everything else works exactly as described there.

    2. Environment: Kubernetes, plus NFS (or any dynamic storage provisioner).

     NFS:

    3. Download the Seata Helm deployment YAML files. For where the files live, see the previous post if you don't know.

     4. I only changed values.yaml and templates/deployment.yaml.

    In deployment.yaml only the last few lines changed: the volume was switched from hostPath to NFS storage.

    The full file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      {{- if .Values.namespace }}
      namespace: {{ .Values.namespace }}
      {{- end}}
      name: {{ include "seata-server.name" . }}
      labels:
    {{ include "seata-server.labels" . | indent 4 }}
    spec:
      replicas: {{ .Values.replicaCount }}
      selector:
        matchLabels:
          app.kubernetes.io/name: {{ include "seata-server.name" . }}
          app.kubernetes.io/instance: {{ .Release.Name }}
      template:
        metadata:
          labels:
            app.kubernetes.io/name: {{ include "seata-server.name" . }}
            app.kubernetes.io/instance: {{ .Release.Name }}
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              ports:
                - name: http
                  containerPort: 8091
                  protocol: TCP
              {{- if .Values.volume }}
              volumeMounts:
                {{- range .Values.volume }}
                - name: {{ .name }}
                  mountPath: {{ .mountPath }}
                {{- end}}
              {{- end}}
              {{- if .Values.env }}
              env:
                {{- if .Values.env.seataIp }}
                - name: SEATA_IP
                  value: {{ .Values.env.seataIp | quote }}
                {{- end }}
                {{- if .Values.env.seataPort }}
                - name: SEATA_PORT
                  value: {{ .Values.env.seataPort | quote }}
                {{- end }}
                {{- if .Values.env.seataEnv }}
                - name: SEATA_ENV
                  value: {{ .Values.env.seataEnv }}
                {{- end }}
                {{- if .Values.env.seataConfigName }}
                - name: SEATA_CONFIG_NAME
                  value: {{ .Values.env.seataConfigName }}
                {{- end }}
                {{- if .Values.env.serverNode }}
                - name: SERVER_NODE
                  value: {{ .Values.env.serverNode | quote }}
                {{- end }}
                {{- if .Values.env.storeMode }}
                - name: STORE_MODE
                  value: {{ .Values.env.storeMode }}
                {{- end }}
              {{- end }}
          {{- if .Values.volume }}
          volumes:
            {{- range .Values.volume }}
            - name: {{ .name }}
              nfs:
                server: {{ .nfsServers }}
                path: {{ .nfsPath }}
            {{- end}}
          {{- end}}

    Only this block actually changed:

      {{- if .Values.volume }}
      volumes:
        {{- range .Values.volume }}
        - name: {{ .name }}
          nfs:
            server: {{ .nfsServers }}
            path: {{ .nfsPath }}
        {{- end}}
      {{- end}}
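For reference, rendered against the values used in this post (seata-config, 192.168.56.211, /data/k8s/resource, all specific to my environment), the loop above produces a plain NFS volume entry. The sketch below just echoes that rendered form:

```shell
# Sketch: what the {{- range .Values.volume }} loop renders to, using the
# values from this post's values.yaml (assumed environment; substitute your own).
NAME=seata-config
NFS_SERVER=192.168.56.211
NFS_PATH=/data/k8s/resource
cat <<EOF
volumes:
  - name: ${NAME}
    nfs:
      server: ${NFS_SERVER}
      path: ${NFS_PATH}
EOF
```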

    The full values.yaml:

    replicaCount: 1
    namespace: default
    image:
      repository: seataio/seata-server
      tag: 1.5.1
      pullPolicy: IfNotPresent
    service:
      type: NodePort
      port: 30091
      nodePort: 30091
    env:
      seataPort: "8091"
      storeMode: "file"
      seataIp: "192.168.56.211"
      seataConfigName: "file:/root/seata-config/registry/registry.conf"
    volume:
      - name: seata-config
        mountPath: /seata-server/resources
        nfsServers: 192.168.56.211
        nfsPath: /data/k8s/resource

    The env settings had been configured before but had no effect, so they can be left out:

    that is, the value after seataConfigName:. The files under /seata-server/resources
    in the container are mounted from /data/k8s/resource on the NFS server.

    Modify the files from the previous post and upload them into /data/k8s/resource:
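Staging the files can be sketched like this; /tmp/seata-nfs-demo stands in for the real export /data/k8s/resource so the commands are safe to dry-run anywhere, and the application.yml content written here is just a placeholder (on the actual NFS server you would copy the real files from the previous post):

```shell
# Stand-in for the NFS export /data/k8s/resource (assumption: on the real
# NFS server you would use the real path and the real config files).
EXPORT_DIR=/tmp/seata-nfs-demo
mkdir -p "${EXPORT_DIR}"
# Drop the customized config into the export (placeholder content).
printf 'server:\n  port: 7091\n' > "${EXPORT_DIR}/application.yml"
ls "${EXPORT_DIR}"
```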

    application.yaml is also the same as in the previous post:

    server:
      port: 7091

    spring:
      application:
        name: seata-server

    logging:
      config: classpath:logback-spring.xml
      file:
        path: ${user.home}/logs/seata
      extend:
        logstash-appender:
          destination: 127.0.0.1:4560
        kafka-appender:
          bootstrap-servers: 127.0.0.1:9092
          topic: logback_to_logstash

    console:
      user:
        username: seata
        password: seata

    seata:
      config:
        # support: nacos, consul, apollo, zk, etcd3
        type: nacos
        nacos:
          server-addr: 192.168.56.211:30683
          namespace: public
          group: SEATA_GROUP
          username: nacos
          password: nacos
          ## if using MSE Nacos with auth, mutually exclusive with username/password
          #access-key: ""
          #secret-key: ""
          data-id: seata.properties
      registry:
        # support: nacos, eureka, redis, zk, consul, etcd3, sofa
        type: nacos
        preferred-networks: 192.168.*
        nacos:
          application: seata-server
          server-addr: 192.168.56.211:30683
          group: SEATA_GROUP
          namespace: public
          cluster: default
          username: nacos
          password: nacos
          ## if using MSE Nacos with auth, mutually exclusive with username/password
          #access-key: ""
          #secret-key: ""
      store:
        # support: file, db, redis
        mode: db
        session:
          mode: db
        lock:
          mode: db
        file:
          dir: sessionStore
          max-branch-session-size: 16384
          max-global-session-size: 512
          file-write-buffer-cache-size: 16384
          session-reload-read-size: 100
          flush-disk-mode: async
        db:
          datasource: druid
          db-type: mysql
          driver-class-name: com.mysql.jdbc.Driver
          url: jdbc:mysql://192.168.56.211:31306/seata?rewriteBatchedStatements=true
          user: root
          password: s00J8De852
          min-conn: 5
          max-conn: 100
          global-table: global_table
          branch-table: branch_table
          lock-table: lock_table
          distributed-lock-table: distributed_lock
          query-limit: 100
          max-wait: 5000
      security:
        secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
        tokenValidityInMilliseconds: 1800000
        ignore:
          urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login

    5. Start it directly:

    cd /root/seata

    helm install seata ./seata-server

    6. Check the result:

    7. Open Nacos: the server shows up as registered.

    8. Seata console UI:

    When the console kept throwing errors, I pulled down the Seata 1.5.1 source and rebuilt the image myself. Honestly, that cost me a whole afternoon of one error after another; in the end it simply would not start, with nothing useful in the output, so I dropped that approach.

    9. Console UI walkthrough

    Change to the chart's service.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "seata-server.fullname" . }}
      labels:
    {{ include "seata-server.labels" . | indent 4 }}
    spec:
      type: {{ .Values.service.type }}
      ports:
        - port: {{ .Values.service.port }}
          targetPort: {{ .Values.service.port }}
          protocol: TCP
          name: http
      selector:
        app.kubernetes.io/name: {{ include "seata-server.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}

    Change targetPort: http to:

          targetPort: {{ .Values.service.port }}

    so the console port is actually exposed.

    Then values.yaml:

    replicaCount: 1
    namespace: default
    image:
      repository: seataio/seata-server
      #repository: library/seata-server
      tag: 1.5.1
      pullPolicy: IfNotPresent
    service:
      type: NodePort
      port: 7091
    env:
      seataPort: "8091"
      storeMode: "file"
      seataIp: "192.168.56.211"
      seataConfigName: "xxx"
    volume:
      - name: seata-config
        mountPath: /seata-server/resources
        nfsServers: 192.168.56.211
        nfsPath: /data/k8s/resource

    Start it up.

    Then visit ip:32542 (the assigned NodePort).

    But there were errors: the lists in the console would not load.

    I also started the stack locally; the console still printed a MySQL error, but it did not break anything, and the server registered with Nacos fine.

    Local access threw the same error. I have no idea why this happens; the logs are far too noisy to find the real cause. Still, the deployment itself was definitely up.

    10. More on the console problem. If you see the same error, it is because you, too, deployed Nacos on k8s or with Helm; that is exactly the situation described above. So how do you fix it?

    I have since solved it; see this Nacos issue for the details:

    helm 部署 nacos2.1.0 代码不能访问,ui可以访问 · Issue #9075 · alibaba/nacos · GitHub

    It covers my problems from deploying Nacos through accessing it and registering with it. If you deployed Nacos with k8s or Helm, you will run into this.

    11. Final solution:

    I have tried quite a few approaches:

    1. First, I swapped MySQL for 5.7.

    2. Helm-deployed Seata that Java code can actually reach: this part I have not solved yet.

    What I did solve, happily, is Seata deployed with plain k8s that Java code can reach, and as far as I can tell you will not find this particular approach elsewhere on CSDN.

    Some will say the official docs cover deployment and already proxy the ports out. Then go try them, and try connecting from code. If that works, you don't need this post; if not, read on.

    Full plain-k8s deployment YAML:

    apiVersion: v1
    kind: Service
    metadata:
      name: seata-server
      namespace: default
      labels:
        k8s-app: seata-server
    spec:
      type: ClusterIP
      ports:
        - port: 8091
          targetPort: 8091
          protocol: TCP
          name: http
      selector:
        k8s-app: seata-server
      sessionAffinity: None
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: seata-server
      namespace: default
      labels:
        k8s-app: seata-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: seata-server
      template:
        metadata:
          labels:
            k8s-app: seata-server
        spec:
          containers:
            - name: seata-server
              image: docker.io/seataio/seata-server:1.5.1
              imagePullPolicy: IfNotPresent
              ports:
                - name: http
                  containerPort: 8091
                  protocol: TCP
              env:
                - name: SEATA_IP
                  value: 192.168.56.211
                - name: SEATA_PORT
                  value: "31113"
              volumeMounts:
                - name: seata-config
                  mountPath: /seata-server/resources
          volumes:
            - name: seata-config
              hostPath:
                path: /root/resources
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: seata-server-testzhangqiang
    spec:
      selector:
        k8s-app: seata-server
      ports:
        - name: client-port1
          port: 7091
          protocol: TCP
          targetPort: 7091
          nodePort: 31005
        - name: bus-port1
          port: 31113
          protocol: TCP
          targetPort: 31113
          nodePort: 31113
      type: NodePort

    The main change is to pass environment variables the same way the Helm chart's env block does.

     

    Why proxy port 31113, and why put 31113 into env?

    Because when your application starts (or restarts), it connects to that port.

    If you start everything on Windows, you would normally connect to port 8091. But with the server deployed in k8s, your machine has no local 8091, so the code cannot connect.

    That is why 31113 is written into SEATA_PORT and proxied out as a NodePort: so that local code can reach it.
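The relationship between the ports above can be summarized in one place; all values come from the manifests in this post and are assumptions tied to my environment:

```shell
# SEATA_PORT inside the pod and the NodePort outside are deliberately the
# same number, so the address the server registers in Nacos is reachable
# from outside the cluster as-is.
SEATA_IP=192.168.56.211     # VM / node IP the server advertises
SEATA_PORT=31113            # advertised port == NodePort
REGISTERED_ADDR="${SEATA_IP}:${SEATA_PORT}"
echo "clients connect to ${REGISTERED_ADDR}"
```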

    Your code and configuration also need the changes below.

    Check the registration info in Nacos; the env block responsible for it is:

              env:
                - name: SEATA_IP
                  value: 192.168.56.211
                - name: SEATA_PORT
                  value: "31113" 

    Here 192.168.56.211 is the VM's IP.

    Double-check that the registered address and this IP match:

     The client code:

    I followed the official dependency matrix strictly: Nacos 2.1 with Seata 1.5.1. (I use a MySQL 5.7 database; with MySQL 8 I fought for two weeks and never got it running.)

    application.yml (the Feign settings must be added, or calls will "read time out"; this is how newer versions configure it, and it took me a long time to find):

    # Data source
    spring:
      datasource:
        username: root
        password: 123456
        url: jdbc:mysql://192.168.56.213:31306/seata_order?characterEncoding=utf8&useSSL=false&serverTimezone=UTC
        driver-class-name: com.mysql.jdbc.Driver
        type: com.alibaba.druid.pool.DruidDataSource
        # Run the SQL script on initialization
        schema: classpath:sql/schema.sql
        initialization-mode: never
      application:
        name: alibaba-order-seata

    # MyBatis settings
    mybatis:
      mapper-locations: classpath:com/xx/order/mapper/*Mapper.xml
      #config-location: classpath:mybatis-config.xml
      typeAliasesPackage: com.xx.order.pojo
      configuration:
        mapUnderscoreToCamelCase: true

    server:
      port: 8072

    feign:
      client:
        config:
          default:
            readTimeout: 10000
            connectTimeout: 10000

    bootstrap.yml (the Nacos configuration has to go in here):

    spring:
      cloud:
        nacos:
          server-addr: 192.168.56.211:31000
          discovery:
            server-addr: 192.168.56.211:31000
            password: nacos
            username: nacos
            enabled: true

    seata:
      tx-service-group: zq
      registry:
        nacos:
          group: SEATA_GROUP
          username: nacos
          password: nacos
          application: seata-server
          # server-addr: 192.168.56.211:31000
          server-addr: 192.168.56.211:31000
          cluster: default
        type: nacos
      config:
        type: nacos
        nacos:
          password: nacos
          username: nacos
          # server-addr: 192.168.56.211:31000
          server-addr: 192.168.56.211:31000
          group: SEATA_GROUP
      service:
        grouplist:
          default: 192.168.56.211:31113
        disable-global-transaction: false
        # vgroup-mapping:
        #   default_tx_group: default
        vgroup-mapping:
          default_tx_group: zq
    Pay attention to that last vgroup-mapping block.

    And that's it, done!

    Go try it yourselves!

     Local startup log:

      .   ____          _            __ _ _
     /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
    ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
     \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
      '  |____| .__|_| |_|_| |_\__, | / / / /
     =========|_|==============|___/=/_/_/_/
     :: Spring Boot ::        (v2.3.12.RELEASE)
    2022-09-06 17:00:48.843 INFO 132 --- [ main] c.t.order.AlibabaOrderSeataApplication : No active profile set, falling back to default profiles: default
    2022-09-06 17:00:49.895 INFO 132 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=f0658dad-eccb-3a82-9f89-e373def31818
    2022-09-06 17:00:49.901 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.seata.spring.boot.autoconfigure.SeataCoreAutoConfiguration' of type [io.seata.spring.boot.autoconfigure.SeataCoreAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:49.902 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'springApplicationContextProvider' of type [io.seata.spring.boot.autoconfigure.provider.SpringApplicationContextProvider] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:49.903 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.seata.spring.boot.autoconfigure.SeataAutoConfiguration' of type [io.seata.spring.boot.autoconfigure.SeataAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:49.958 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'failureHandler' of type [io.seata.tm.api.DefaultFailureHandlerImpl] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:49.974 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'springCloudAlibabaConfiguration' of type [io.seata.spring.boot.autoconfigure.properties.SpringCloudAlibabaConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:49.978 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'seataProperties' of type [io.seata.spring.boot.autoconfigure.properties.SeataProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:49.981 INFO 132 --- [ main] i.s.s.b.a.SeataAutoConfiguration : Automatically configure Seata
    2022-09-06 17:00:50.061 INFO 132 --- [ main] io.seata.config.ConfigurationFactory : load Configuration from :Spring Configuration
    2022-09-06 17:00:50.076 INFO 132 --- [ main] i.seata.config.nacos.NacosConfiguration : Nacos check auth with userName/password.
    2022-09-06 17:00:50.128 INFO 132 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
    2022-09-06 17:00:50.128 INFO 132 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
    2022-09-06 17:00:53.548 INFO 132 --- [ main] i.s.s.a.GlobalTransactionScanner : Initializing Global Transaction Clients ...
    2022-09-06 17:00:53.629 INFO 132 --- [ main] i.s.core.rpc.netty.NettyClientBootstrap : NettyClientBootstrap has started
    2022-09-06 17:00:53.652 INFO 132 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
    2022-09-06 17:00:53.652 INFO 132 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
    2022-09-06 17:00:53.987 INFO 132 --- [ main] i.s.c.r.netty.NettyClientChannelManager : will connect to 192.168.56.211:31113
    2022-09-06 17:00:54.530 INFO 132 --- [ main] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:192.168.56.211:31113,msg:< RegisterTMRequest{applicationId='alibaba-order-seata', transactionServiceGroup='zq'} >
    2022-09-06 17:00:55.836 INFO 132 --- [ main] i.s.c.rpc.netty.TmNettyRemotingClient : register TM success. client version:1.5.1, server version:1.5.1,channel:[id: 0xccc3ed55, L:/192.168.56.1:56381 - R:/192.168.56.211:31113]
    2022-09-06 17:00:55.845 INFO 132 --- [ main] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 69 ms, version:1.5.1,role:TMROLE,channel:[id: 0xccc3ed55, L:/192.168.56.1:56381 - R:/192.168.56.211:31113]
    2022-09-06 17:00:55.846 INFO 132 --- [ main] i.s.s.a.GlobalTransactionScanner : Transaction Manager Client is initialized. applicationId[alibaba-order-seata] txServiceGroup[zq]
    2022-09-06 17:00:55.864 INFO 132 --- [ main] io.seata.rm.datasource.AsyncWorker : Async Commit Buffer Limit: 10000
    2022-09-06 17:00:55.865 INFO 132 --- [ main] i.s.rm.datasource.xa.ResourceManagerXA : ResourceManagerXA init ...
    2022-09-06 17:00:55.874 INFO 132 --- [ main] i.s.core.rpc.netty.NettyClientBootstrap : NettyClientBootstrap has started
    2022-09-06 17:00:55.874 INFO 132 --- [ main] i.s.s.a.GlobalTransactionScanner : Resource Manager is initialized. applicationId[alibaba-order-seata] txServiceGroup[zq]
    2022-09-06 17:00:55.874 INFO 132 --- [ main] i.s.s.a.GlobalTransactionScanner : Global Transaction Clients are initialized.
    2022-09-06 17:00:55.877 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.seata.spring.boot.autoconfigure.SeataDataSourceAutoConfiguration' of type [io.seata.spring.boot.autoconfigure.SeataDataSourceAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:56.006 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'com.alibaba.cloud.seata.feign.SeataFeignClientAutoConfiguration$FeignBeanPostProcessorConfiguration' of type [com.alibaba.cloud.seata.feign.SeataFeignClientAutoConfiguration$FeignBeanPostProcessorConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:56.010 INFO 132 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'seataFeignObjectWrapper' of type [com.alibaba.cloud.seata.feign.SeataFeignObjectWrapper] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2022-09-06 17:00:56.369 INFO 132 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8072 (http)
    2022-09-06 17:00:56.381 INFO 132 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
    2022-09-06 17:00:56.381 INFO 132 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.46]
    2022-09-06 17:00:56.612 INFO 132 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
    2022-09-06 17:00:56.612 INFO 132 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 7748 ms
    2022-09-06 17:00:56.854 INFO 132 --- [ main] c.a.d.s.b.a.DruidDataSourceAutoConfigure : Init DruidDataSource
    Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
    2022-09-06 17:00:56.991 INFO 132 --- [ main] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
    2022-09-06 17:00:57.284 INFO 132 --- [ main] i.s.c.r.netty.NettyClientChannelManager : will connect to 192.168.56.211:31113
    2022-09-06 17:00:57.284 INFO 132 --- [ main] i.s.c.rpc.netty.RmNettyRemotingClient : RM will register :jdbc:mysql://192.168.56.213:31306/seata_order
    2022-09-06 17:00:57.285 INFO 132 --- [ main] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:RMROLE,address:192.168.56.211:31113,msg:< RegisterRMRequest{resourceIds='jdbc:mysql://192.168.56.213:31306/seata_order', applicationId='alibaba-order-seata', transactionServiceGroup='zq'} >
    2022-09-06 17:00:57.300 INFO 132 --- [ main] i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:1.5.1, server version:1.5.1,channel:[id: 0xe1b055dc, L:/192.168.56.1:56386 - R:/192.168.56.211:31113]
    2022-09-06 17:00:57.301 INFO 132 --- [ main] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 10 ms, version:1.5.1,role:RMROLE,channel:[id: 0xe1b055dc, L:/192.168.56.1:56386 - R:/192.168.56.211:31113]
    2022-09-06 17:00:57.745 INFO 132 --- [ main] o.s.c.openfeign.FeignClientFactoryBean : For 'alibaba-stock-seata' URL not provided. Will try picking an instance via load-balancing.
    2022-09-06 17:00:57.847 INFO 132 --- [ main] i.s.s.a.GlobalTransactionScanner : Bean[com.xx.order.service.impl.OrderServiceImpl] with name [orderServiceImpl] would use interceptor [io.seata.spring.annotation.GlobalTransactionalInterceptor]
    2022-09-06 17:00:57.933 WARN 132 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
    2022-09-06 17:00:57.933 INFO 132 --- [ main] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
    2022-09-06 17:00:57.944 WARN 132 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
    2022-09-06 17:00:57.944 INFO 132 --- [ main] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
    2022-09-06 17:00:58.110 INFO 132 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
    2022-09-06 17:00:58.951 INFO 132 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'Nacos-Watch-Task-Scheduler'
    2022-09-06 17:00:59.958 INFO 132 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
    2022-09-06 17:00:59.958 INFO 132 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
    2022-09-06 17:01:00.268 INFO 132 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8072 (http) with context path ''
    2022-09-06 17:01:00.287 INFO 132 --- [ main] c.a.c.n.registry.NacosServiceRegistry : nacos registry, DEFAULT_GROUP alibaba-order-seata 172.20.10.3:8072 register finished
    2022-09-06 17:01:01.047 INFO 132 --- [ main] c.t.order.AlibabaOrderSeataApplication : Started AlibabaOrderSeataApplication in 14.933 seconds (JVM running for 16.759)
    2022-09-06 17:01:13.060 INFO 132 --- [nio-8072-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
    2022-09-06 17:01:13.060 INFO 132 --- [nio-8072-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
    2022-09-06 17:01:13.071 INFO 132 --- [nio-8072-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 11 ms
    2022-09-06 17:01:13.142 INFO 132 --- [nio-8072-exec-1] io.seata.tm.TransactionManagerHolder : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@30a3de9b
    2022-09-06 17:01:13.160 INFO 132 --- [nio-8072-exec-1] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.56.211:31113:3657226080688205940]
    assssssssssssssss
    2022-09-06 17:01:14.196 INFO 132 --- [nio-8072-exec-1] c.netflix.config.ChainedDynamicProperty : Flipping property: alibaba-stock-seata.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
    2022-09-06 17:01:14.231 INFO 132 --- [nio-8072-exec-1] c.netflix.loadbalancer.BaseLoadBalancer : Client: alibaba-stock-seata instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=alibaba-stock-seata,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
    2022-09-06 17:01:14.240 INFO 132 --- [nio-8072-exec-1] c.n.l.DynamicServerListLoadBalancer : Using serverListUpdater PollingServerListUpdater
    2022-09-06 17:01:14.336 INFO 132 --- [nio-8072-exec-1] c.netflix.config.ChainedDynamicProperty : Flipping property: alibaba-stock-seata.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
    2022-09-06 17:01:14.338 INFO 132 --- [nio-8072-exec-1] c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client alibaba-stock-seata initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=alibaba-stock-seata,current list of Servers=[172.20.10.3:8073],Load balancer stats=Zone stats: {unknown=[Zone:unknown; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
    },Server stats: [[Server:172.20.10.3:8073; Zone:UNKNOWN; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 08:00:00 CST 1970; First connection made: Thu Jan 01 08:00:00 CST 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
    ]}ServerList:com.alibaba.cloud.nacos.ribbon.NacosServerList@64034cde
    2022-09-06 17:01:15.252 INFO 132 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: alibaba-stock-seata.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
    2022-09-06 17:01:15.348 INFO 132 --- [nio-8072-exec-1] i.seata.tm.api.DefaultGlobalTransaction : Suspending current transaction, xid = 192.168.56.211:31113:3657226080688205940
    2022-09-06 17:01:15.348 INFO 132 --- [nio-8072-exec-1] i.seata.tm.api.DefaultGlobalTransaction : [192.168.56.211:31113:3657226080688205940] commit status: Committed
    2022-09-06 17:01:16.086 INFO 132 --- [ch_RMROLE_1_1_8] i.s.c.r.p.c.RmBranchCommitProcessor : rm client handle branch commit process:xid=192.168.56.211:31113:3657226080688205940,branchId=3657226080688205942,branchType=AT,resourceId=jdbc:mysql://192.168.56.213:31306/seata_order,applicationData={"skipCheckLock":true}
    2022-09-06 17:01:16.101 INFO 132 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler : Branch committing: 192.168.56.211:31113:3657226080688205940 3657226080688205942 jdbc:mysql://192.168.56.213:31306/seata_order {"skipCheckLock":true}
    2022-09-06 17:01:16.102 INFO 132 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler : Branch commit result: PhaseTwo_Committed
    2022-09-06 17:17:57.315 WARN 132 --- [MetaChecker_1_1] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://192.168.56.213:31306/seata_order?characterEncoding=utf8&useSSL=false&serverTimezone=UTC, version : 1.2.3, lastPacketReceivedIdleMillis : 119997

    Wrap-up: the Helm deployment has been sorted out as well.

    Deployment method:

    Or see the Helm chart repo: https://heidaodageshiwo.github.io/helm-chart/

    2.1 Installing Seata 1.5.1

    Add the repo:
    [root@master ~]# helm repo ls
    NAME    URL                                         
    myrepo  https://heidaodageshiwo.github.io/helm-chart
    [root@master ~]# 
    
    Update the repo, then pull the chart (or just download it from GitHub):
    [root@master ~]#  helm search  repo myrepo
    NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
    myrepo/helloworld       0.1.0           1.16.0          A Helm chart for Kubernetes                       
    myrepo/nacos            0.1.5           1.0             A Helm chart for Kubernetes                       
    myrepo/nginx            13.1.7          1.23.1          NGINX Open Source is a web server that can be a...
    myrepo/seata-server     1.0.0           1.0             Seata Server  
    
    
    [root@master ~]# mkdir seatahelmtest
    [root@master ~]# cd seatahelmtest/
    [root@master seatahelmtest]# ll
    total 0
    [root@master seatahelmtest]# helm pull myrepo/seata-server
    [root@master seatahelmtest]# ls
    seata-server-1.0.0.tgz
    [root@master seatahelmtest]# tar -zxvf seata-server-1.0.0.tgz 
    seata-server/Chart.yaml
    seata-server/values.yaml
    seata-server/templates/NOTES.txt
    seata-server/templates/_helpers.tpl
    seata-server/templates/deployment.yaml
    seata-server/templates/service.yaml
    seata-server/templates/tests/test-connection.yaml
    seata-server/.helmignore
    seata-server/node.yaml
    seata-server/servicesss.yaml
    [root@master seatahelmtest]# ls
    seata-server  seata-server-1.0.0.tgz
    [root@master seatahelmtest]# ll
    total 4
    drwxr-xr-x 3 root root  119 Sep  7 16:37 seata-server
    -rw-r--r-- 1 root root 2636 Sep  7 16:37 seata-server-1.0.0.tgz
    [root@master seatahelmtest]# ls
    seata-server  seata-server-1.0.0.tgz
    [root@master seatahelmtest]# helm install seata ./seata-server
    NAME: seata
    LAST DEPLOYED: Wed Sep  7 16:39:17 2022
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    NOTES:
    1. Get the application URL by running these commands:
      export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services seata-seata-server)
      export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
      echo http://$NODE_IP:$NODE_PORT
    [root@master seatahelmtest]# 
    

    2.2 A few things to note before installing (if the official deployment method lets you proxy port 8091 out, you don't need any of this)

    1. First mount the config files out of the container; I used NFS: https://blog.csdn.net/hunheidaode/article/details/126623672
    2. Change the IP address in values.yaml; mine is 192.168.56.211.
    3. I used MySQL 5.7; MySQL 8 kept throwing errors.
    4. Now you can install.

    Open the Seata console at ip:31005.

    Mind this IP and port; the addresses just have to match. The way the code connects is covered earlier in this post (k8s helm Seata1.5.1, hunheidaode's blog, CSDN), so it is not repeated here.
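Assembling the console URL from the Service above (node IP and NodePort are my values; adjust for your cluster):

```shell
# Console URL = node IP + the nodePort mapped to container port 7091.
NODE_IP=192.168.56.211
CONSOLE_NODE_PORT=31005
CONSOLE_URL="http://${NODE_IP}:${CONSOLE_NODE_PORT}"
echo "${CONSOLE_URL}"   # log in with the console user from application.yaml (seata/seata)
```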

    2.3 Testing:

    If you run into problems, leave a comment on the blog.
    

     

     

  • Original article: https://blog.csdn.net/hunheidaode/article/details/126623672