When I last studied Seata, the latest version was 1.4.2. Recently I checked the official site and saw it had moved to 1.5.2, so I decided to set it up and take a look.
Before installing, I skimmed the Seata source on GitHub and noticed that the startup configuration differs between the two versions:
1.4.2 uses file.conf and registry.conf,
while 1.5.2 uses only a single application.yml.
To get a copy of the default startup configuration, I first started Seata once:
docker run -d --name seata-server-1.5.2 -p 8091:8091 -p 7091:7091 seataio/seata-server:1.5.2
Then enter the container and look at the configuration:
[root@iZuf68o84giy66jqoyhhzzZ ~]# docker exec -it seata-server-1.5.2 sh
/seata-server # ls
classes libs resources
/seata-server # cd resources/
/seata-server/resources # ls
META-INF README.md application.yml io logback-spring.xml
README-zh.md application.example.yml banner.txt logback lua
/seata-server/resources # pwd
/seata-server/resources
/seata-server/resources # exit;
[root@iZuf68o84giy66jqoyhhzzZ ~]#
As shown above, the startup configuration files live under /seata-server/resources. Exit the container and copy that directory out to the host.
First, create a directory on the host to hold the configuration:
[root@iZuf60p2g1civqxx94w33bZ ~]# mkdir -p /data/seata1.5.2/
Copy the configuration out of the container:
[root@iZuf68o84giy66jqoyhhzzZ seata1.5.2]# docker cp seata-server-1.5.2:/seata-server/resources /data/seata1.5.2/
[root@iZuf68o84giy66jqoyhhzzZ seata]# cd /data/seata1.5.2/
[root@iZuf68o84giy66jqoyhhzzZ seata1.5.2]# ll
total 4
drwxr-xr-x 6 root root 4096 Aug 1 17:32 resources
[root@iZuf68o84giy66jqoyhhzzZ seata1.5.2]# ll resources/
total 44
-rw-r--r-- 1 root root 4471 Jan 1 1970 application.example.yml
-rw-r--r-- 1 root root 4028 Aug 1 17:31 application.yml
-rw-r--r-- 1 root root 664 Jan 1 1970 banner.txt
drwxr-xr-x 3 root root 4096 Jan 1 1970 io
drwxr-xr-x 2 root root 4096 Jan 1 1970 logback
-rw-r--r-- 1 root root 2602 Jan 1 1970 logback-spring.xml
drwxr-xr-x 3 root root 4096 Jan 1 1970 lua
drwxr-xr-x 3 root root 4096 Jan 1 1970 META-INF
-rw-r--r-- 1 root root 1406 Jan 1 1970 README.md
-rw-r--r-- 1 root root 1400 Jan 1 1970 README-zh.md
[root@iZuf68o84giy66jqoyhhzzZ seata1.5.2]#
Then edit application.yml to point at your own registry and configuration center. If you are unsure what the settings should look like, copy the corresponding section from application.example.yml and adjust it.
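For example, to use Nacos as both the registry and the configuration center, the relevant part of application.yml looks roughly like the following. This is a sketch based on application.example.yml; the server address, namespace, and credentials are placeholders you must replace with your own values:

```yaml
seata:
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848   # placeholder: your Nacos address
      namespace:                    # empty = public namespace
      group: SEATA_GROUP
      username: nacos               # placeholder credentials
      password: nacos
      data-id: seataServer.properties
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848   # placeholder: your Nacos address
      group: SEATA_GROUP
      namespace:
      cluster: default
      username: nacos
      password: nacos
```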
Nacos configuration
Push the following properties into the Nacos configuration center, under the data-id that application.yml points at (seataServer.properties in the example config):
service.vgroupMapping.default-tx-group=default
store.mode=db
store.lock.mode=db
store.session.mode=db
#store.publicKey=
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://localhost:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h
#Log rule configuration, for client and server
log.exceptionRate=100
#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none
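Note that store.mode=db means the server persists transaction sessions and locks in MySQL, so the seata database referenced by store.db.url must exist and contain the global_table, branch_table, lock_table, and distributed_lock tables. The DDL script ships with the Seata source tree (for 1.5.x, under script/server/db/mysql.sql). A minimal setup might look like this — the MySQL host and credentials are placeholders:

```shell
# Create the seata database (host/user are placeholders for your environment)
mysql -h127.0.0.1 -uroot -p -e "CREATE DATABASE IF NOT EXISTS seata DEFAULT CHARACTER SET utf8mb4;"
# Load the official DDL from a checkout of the Seata source
mysql -h127.0.0.1 -uroot -p seata < script/server/db/mysql.sql
```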
Then stop and remove the Docker container, and run it again with the host configuration directory mounted into the container:
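The stop-and-remove step itself:

```shell
# Stop the running container, then delete it so the name can be reused
docker stop seata-server-1.5.2
docker rm seata-server-1.5.2
```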
docker run -d --name seata-server-1.5.2 -p 8091:8091 -p 7091:7091 -v /data/seata1.5.2/resources/:/seata-server/resources seataio/seata-server:1.5.2
If you deploy Seata on a remote server, specify that server's public IP in the command via the SEATA_IP environment variable so clients can reach it:
docker run -d --name seata-server-1.5.2 -p 8091:8091 -p 7091:7091 -e SEATA_IP=<public-ip> -v /data/seata1.5.2/resources/:/seata-server/resources seataio/seata-server:1.5.2
Check the logs to confirm the server started:
docker logs seata-server-1.5.2
Then check in the Nacos console that seata-server has registered.
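Besides the console, registration can also be verified with the Nacos Open API. The Nacos address here is a placeholder, and the service name and group assume the default seata-server registration:

```shell
# Ask Nacos for the registered instances of seata-server
curl -s "http://127.0.0.1:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP"
```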
At this point, Seata 1.5.2 is deployed and running.