ShardingSphere is an open-source ecosystem of distributed database middleware consisting of three mutually independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar (planned). All of them provide standardized data sharding, distributed transactions, and database governance, and they fit a wide range of scenarios such as homogeneous Java applications, heterogeneous languages, and cloud-native deployments. For a more detailed introduction, see the Overview page on the official ShardingSphere website.
Splitting databases and tables addresses the performance degradation (I/O and CPU bottlenecks) caused by excessive data volume: a formerly monolithic database is split into several databases, and large tables are split into several smaller tables, so that each individual database and table holds less data and the overall database performance improves.
Common splitting approaches:
- Partitioned tables
- Vertical database splitting
- Vertical table splitting (splitting one table into several tables by columns)
- Horizontal database splitting
- Horizontal table splitting
1. Add the sharding-jdbc dependencies to the module's pom.xml
```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>2.2.2</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.13.1</version>
    </dependency>
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>javax.servlet-api</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.fastjson2</groupId>
        <artifactId>fastjson2</artifactId>
        <version>2.0.10</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.8</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
        <version>1.2.4</version>
    </dependency>
    <!-- sharding-jdbc -->
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>sharding-jdbc-core</artifactId>
        <version>4.1.1</version>
    </dependency>
</dependencies>
```
2. Create the two test databases

```sql
create database `ry-order1`;
create database `ry-order2`;
```
3. Create the two test order tables

Run the following in both `ry-order1` and `ry-order2`: the actual-node expression `order$->{1..2}.sys_order_$->{0..1}` used later expects `sys_order_0` and `sys_order_1` to exist in each database.

```sql
-- ----------------------------
-- Order table sys_order_0
-- ----------------------------
drop table if exists sys_order_0;
create table sys_order_0
(
  order_id  bigint(20)  not null comment 'order ID',
  user_id   bigint(64)  not null comment 'user ID',
  status    char(1)     not null comment 'status (0 = success, 1 = failure)',
  order_no  varchar(64) default null comment 'order serial number',
  primary key (order_id)
) engine=innodb comment = 'order table';

-- ----------------------------
-- Order table sys_order_1
-- ----------------------------
drop table if exists sys_order_1;
create table sys_order_1
(
  order_id  bigint(20)  not null comment 'order ID',
  user_id   bigint(64)  not null comment 'user ID',
  status    char(1)     not null comment 'status (0 = success, 1 = failure)',
  order_no  varchar(64) default null comment 'order serial number',
  primary key (order_id)
) engine=innodb comment = 'order table';
```
4. Add the test data sources to application-druid.yml

application.yml:

```yaml
server:
  # HTTP port of the server (defaults to 80 if omitted)
  port: 8080
  servlet:
    # context path of the application
    context-path: /

# Spring configuration
spring:
  # load the application-druid.yml profile
  profiles:
    active: druid

# MyBatis
mybatis:
  # package scanned for type aliases
  typeAliasesPackage: com.common.pojo
  # locations of the mapper XML files
  mapperLocations: classpath*:mapper/**/*Mapper.xml
  # global MyBatis configuration file
  # configLocation: classpath:mybatis/mybatis-config.xml
```
application-druid.yml:

```yaml
# Data source configuration
spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    druid:
      # master data source
      master:
        url: jdbc:mysql://localhost:3306/ry?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: root
        password: 123456
      # order database 1
      order1:
        enabled: true
        url: jdbc:mysql://localhost:3306/ry-order1?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: root
        password: 123456
      # order database 2
      order2:
        enabled: true
        url: jdbc:mysql://localhost:3306/ry-order2?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: root
        password: 123456
      # initial pool size
      initialSize: 5
      # minimum number of idle connections
      minIdle: 10
      # maximum number of active connections
      maxActive: 20
      # maximum wait time when acquiring a connection (ms)
      maxWait: 60000
      # interval between eviction runs that close idle connections (ms)
      timeBetweenEvictionRunsMillis: 60000
      # minimum time a connection may stay idle in the pool (ms)
      minEvictableIdleTimeMillis: 300000
      # maximum time a connection may stay idle in the pool (ms)
      maxEvictableIdleTimeMillis: 900000
      # query used to validate connections
      validationQuery: SELECT 1 FROM DUAL
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      webStatFilter:
        enabled: true
      statViewServlet:
        enabled: true
        # allow-list; empty means all hosts may access
        allow:
        url-pattern: /druid/*
        # console username and password
        login-username: admin
        login-password: 123456
      filter:
        stat:
          enabled: true
          # slow SQL logging
          log-slow-sql: true
          slow-sql-millis: 1000
          merge-sql: true
        wall:
          config:
            multi-statement-allow: true
```
5. Data source context holder

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Holds the data source key for the current thread
 */
public class DynamicDataSourceContextHolder {

    public static final Logger log = LoggerFactory.getLogger(DynamicDataSourceContextHolder.class);

    /**
     * ThreadLocal gives every thread its own copy of the variable, so each
     * thread can change its copy without affecting the copies held by other threads.
     */
    private static final ThreadLocal<String> CONTEXT_HOLDER = new ThreadLocal<>();

    /**
     * Set the data source key
     */
    public static void setDataSourceType(String dsType) {
        log.info("Switching to data source {}", dsType);
        CONTEXT_HOLDER.set(dsType);
    }

    /**
     * Get the data source key
     */
    public static String getDataSourceType() {
        return CONTEXT_HOLDER.get();
    }

    /**
     * Clear the data source key
     */
    public static void clearDataSourceType() {
        CONTEXT_HOLDER.remove();
    }
}
```
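The usage contract of such a holder matters in a web application: worker threads come from a pool, so every `set` must be paired with a `remove` or a stale key will leak into the next request. The sketch below is a self-contained illustration of that pattern (class and method names are hypothetical, not part of the project above):

```java
import java.util.function.Supplier;

// Illustrative stand-in for the context holder above: set() in a try block,
// remove() in finally, so pooled threads never see a stale data source key.
public class ContextHolderDemo {

    static final ThreadLocal<String> CONTEXT_HOLDER = new ThreadLocal<>();

    // run a piece of work with a given data source key, always cleaning up
    static String runWith(String dsType, Supplier<String> work) {
        CONTEXT_HOLDER.set(dsType);
        try {
            return work.get();
        } finally {
            CONTEXT_HOLDER.remove(); // mandatory cleanup for thread pools
        }
    }

    public static void main(String[] args) {
        String seen = runWith("SHARDING", CONTEXT_HOLDER::get);
        System.out.println(seen);                 // SHARDING
        System.out.println(CONTEXT_HOLDER.get()); // null - key was cleaned up
    }
}
```

In the project itself this pairing is typically enforced by an AOP aspect around the annotated service methods.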
6. Dynamic data source switching

Data source switching is implemented with AbstractRoutingDataSource. Spring's AbstractRoutingDataSource selects the current data source according to a user-defined rule, so we can switch to the required data source before executing a query. Its abstract method determineCurrentLookupKey() is called before each database operation and decides which target data source to use.

```java
import com.common.utils.DynamicDataSourceContextHolder;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

import javax.sql.DataSource;
import java.util.Map;

/**
 * Dynamic data source routing
 */
public class DynamicDataSource extends AbstractRoutingDataSource {

    public DynamicDataSource(DataSource defaultTargetDataSource, Map<Object, Object> targetDataSources) {
        super.setDefaultTargetDataSource(defaultTargetDataSource);
        super.setTargetDataSources(targetDataSources);
        super.afterPropertiesSet();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return DynamicDataSourceContextHolder.getDataSourceType();
    }
}
```
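The routing principle can be shown without Spring at all: each call resolves a lookup key from the thread context and picks the matching target from a map, falling back to a default when no key is set. A minimal sketch (all names here are illustrative, the "targets" are plain strings standing in for data sources):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of the AbstractRoutingDataSource idea: resolve a
// per-thread lookup key, then pick the matching target, else the default.
public class RoutingDemo {

    static final ThreadLocal<String> KEY = new ThreadLocal<>();
    static final Map<String, String> TARGETS = new HashMap<>();
    static final String DEFAULT = "MASTER-url";

    static {
        TARGETS.put("MASTER", "MASTER-url");
        TARGETS.put("SHARDING", "SHARDING-url");
    }

    // mirrors determineCurrentLookupKey() plus the target lookup
    static String determineTarget() {
        return TARGETS.getOrDefault(KEY.get(), DEFAULT);
    }

    public static void main(String[] args) {
        System.out.println(determineTarget()); // MASTER-url (no key set yet)
        KEY.set("SHARDING");
        System.out.println(determineTarget()); // SHARDING-url
        KEY.remove();
    }
}
```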
7. Custom data source switching annotation

```java
import com.common.enums.DataSourceType;

import java.lang.annotation.*;

/**
 * Custom multi-data-source switching annotation
 *
 * Priority: method first, then class. If the method overrides the data source
 * type declared on the class, the method wins; otherwise the class-level value applies.
 */
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
public @interface DataSource {
    /**
     * Name of the data source to switch to
     */
    public DataSourceType value() default DataSourceType.MASTER;
}
```

```java
/**
 * Data source types
 */
public enum DataSourceType {
    /**
     * master database
     */
    MASTER,

    /**
     * slave database
     */
    SLAVE,

    /**
     * sharded databases and tables
     */
    SHARDING
}
```
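The "method first, then class" rule is how the aspect that processes this annotation (not shown in this article) would resolve the effective data source. A runnable sketch of that resolution via reflection, with the annotation and enum redeclared locally so the example compiles on its own:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Sketch of the method-over-class priority rule for @DataSource.
// All types here are local stand-ins for the project's real classes.
public class PriorityDemo {

    enum DataSourceType { MASTER, SLAVE, SHARDING }

    @Target({ElementType.METHOD, ElementType.TYPE})
    @Retention(RetentionPolicy.RUNTIME)
    @interface DataSource { DataSourceType value() default DataSourceType.MASTER; }

    @DataSource(DataSourceType.MASTER)
    static class OrderService {
        @DataSource(DataSourceType.SHARDING)
        void shardedQuery() {}
        void plainQuery() {}
    }

    // method-level annotation wins; otherwise fall back to the class-level one
    static DataSourceType resolve(Method m) {
        DataSource ds = m.getAnnotation(DataSource.class);
        if (ds == null) ds = m.getDeclaringClass().getAnnotation(DataSource.class);
        return ds == null ? DataSourceType.MASTER : ds.value();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve(OrderService.class.getDeclaredMethod("shardedQuery"))); // SHARDING
        System.out.println(resolve(OrderService.class.getDeclaredMethod("plainQuery")));   // MASTER
    }
}
```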
8. Druid configuration properties

```java
import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

/**
 * Druid configuration properties
 */
@Configuration
public class DruidProperties {

    @Value("${spring.datasource.druid.initialSize}")
    private int initialSize;

    @Value("${spring.datasource.druid.minIdle}")
    private int minIdle;

    @Value("${spring.datasource.druid.maxActive}")
    private int maxActive;

    @Value("${spring.datasource.druid.maxWait}")
    private int maxWait;

    @Value("${spring.datasource.druid.timeBetweenEvictionRunsMillis}")
    private int timeBetweenEvictionRunsMillis;

    @Value("${spring.datasource.druid.minEvictableIdleTimeMillis}")
    private int minEvictableIdleTimeMillis;

    @Value("${spring.datasource.druid.maxEvictableIdleTimeMillis}")
    private int maxEvictableIdleTimeMillis;

    @Value("${spring.datasource.druid.validationQuery}")
    private String validationQuery;

    @Value("${spring.datasource.druid.testWhileIdle}")
    private boolean testWhileIdle;

    @Value("${spring.datasource.druid.testOnBorrow}")
    private boolean testOnBorrow;

    @Value("${spring.datasource.druid.testOnReturn}")
    private boolean testOnReturn;

    public DruidDataSource dataSource(DruidDataSource datasource) {
        /** initial, minimum, and maximum pool size */
        datasource.setInitialSize(initialSize);
        datasource.setMaxActive(maxActive);
        datasource.setMinIdle(minIdle);

        /** maximum wait time when acquiring a connection */
        datasource.setMaxWait(maxWait);

        /** interval between eviction runs that close idle connections (ms) */
        datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);

        /** minimum and maximum time a connection may stay idle in the pool (ms) */
        datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
        datasource.setMaxEvictableIdleTimeMillis(maxEvictableIdleTimeMillis);

        /**
         * SQL used to validate connections; must be a query, commonly select 'x'.
         * If validationQuery is null, testOnBorrow, testOnReturn, and testWhileIdle have no effect.
         */
        datasource.setValidationQuery(validationQuery);
        /** Recommended: true. Does not hurt performance and improves safety. Validates a connection
         *  on borrow only if it has been idle longer than timeBetweenEvictionRunsMillis. */
        datasource.setTestWhileIdle(testWhileIdle);
        /** Validate with validationQuery on every borrow; enabling this reduces performance. */
        datasource.setTestOnBorrow(testOnBorrow);
        /** Validate with validationQuery on every return; enabling this reduces performance. */
        datasource.setTestOnReturn(testOnReturn);
        return datasource;
    }
}
```
9. Multi-data-source configuration

```java
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceBuilder;
import com.alibaba.druid.spring.boot.autoconfigure.properties.DruidStatProperties;
import com.alibaba.druid.util.Utils;
import com.common.config.properties.DruidProperties;
import com.common.datasource.DynamicDataSource;
import com.common.enums.DataSourceType;
import com.common.utils.SpringUtils;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import javax.servlet.*;
import javax.sql.DataSource;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/**
 * Druid multi-data-source configuration
 */
@Configuration
public class DruidConfig {

    @Bean
    @ConfigurationProperties("spring.datasource.druid.master")
    public DataSource masterDataSource(DruidProperties druidProperties) {
        DruidDataSource dataSource = DruidDataSourceBuilder.create().build();
        return druidProperties.dataSource(dataSource);
    }

    @Bean
    @ConfigurationProperties("spring.datasource.druid.slave")
    @ConditionalOnProperty(prefix = "spring.datasource.druid.slave", name = "enabled", havingValue = "true")
    public DataSource slaveDataSource(DruidProperties druidProperties) {
        DruidDataSource dataSource = DruidDataSourceBuilder.create().build();
        return druidProperties.dataSource(dataSource);
    }

    @Bean(name = "dynamicDataSource")
    @Primary
    public DynamicDataSource dataSource(DataSource masterDataSource) {
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put(DataSourceType.MASTER.name(), masterDataSource);
        setDataSource(targetDataSources, DataSourceType.SLAVE.name(), "slaveDataSource");
        setDataSource(targetDataSources, DataSourceType.SHARDING.name(), "shardingDataSource");
        return new DynamicDataSource(masterDataSource, targetDataSources);
    }

    /**
     * Register an optional data source
     *
     * @param targetDataSources map of candidate data sources
     * @param sourceName        data source name
     * @param beanName          bean name
     */
    public void setDataSource(Map<Object, Object> targetDataSources, String sourceName, String beanName) {
        try {
            DataSource dataSource = SpringUtils.getBean(beanName);
            targetDataSources.put(sourceName, dataSource);
        } catch (Exception e) {
            // the bean is absent when that data source is disabled; ignore it
        }
    }

    /**
     * Remove the advertisement at the bottom of the Druid monitoring page
     */
    @SuppressWarnings({"rawtypes", "unchecked"})
    @Bean
    @ConditionalOnProperty(name = "spring.datasource.druid.statViewServlet.enabled", havingValue = "true")
    public FilterRegistrationBean removeDruidFilterRegistrationBean(DruidStatProperties properties) {
        // read the stat view servlet settings
        DruidStatProperties.StatViewServlet config = properties.getStatViewServlet();
        // derive the URL pattern for common.js
        String pattern = config.getUrlPattern() != null ? config.getUrlPattern() : "/druid/*";
        String commonJsPattern = pattern.replaceAll("\\*", "js/common.js");
        final String filePath = "support/http/resources/js/common.js";
        // filter that rewrites common.js on the fly
        Filter filter = new Filter() {
            @Override
            public void init(javax.servlet.FilterConfig filterConfig) throws ServletException {
            }

            @Override
            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                chain.doFilter(request, response);
                // reset the buffer; response headers are preserved
                response.resetBuffer();
                // load the original common.js
                String text = Utils.readFromResource(filePath);
                // strip the banner/ad markup at the bottom of the page
                text = text.replaceAll("<a.*?banner\"></a><br/>", "");
                text = text.replaceAll("powered.*?shrek.wang</a>", "");
                response.getWriter().write(text);
            }

            @Override
            public void destroy() {
            }
        };
        FilterRegistrationBean registrationBean = new FilterRegistrationBean();
        registrationBean.setFilter(filter);
        registrationBean.addUrlPatterns(commonJsPattern);
        return registrationBean;
    }
}
```
10. Sharding configuration

```java
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceBuilder;
import com.common.config.properties.DruidProperties;
import com.common.sharding.ShardingAlgorithm;
import org.apache.shardingsphere.api.config.sharding.KeyGeneratorConfiguration;
import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.InlineShardingStrategyConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.StandardShardingStrategyConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

/**
 * Sharding configuration
 */
@Configuration
public class ShardingDataSourceConfig {

    @Bean
    @ConfigurationProperties("spring.datasource.druid.order1")
    @ConditionalOnProperty(prefix = "spring.datasource.druid.order1", name = "enabled", havingValue = "true")
    public DataSource order1DataSource(DruidProperties druidProperties) {
        DruidDataSource dataSource = DruidDataSourceBuilder.create().build();
        return druidProperties.dataSource(dataSource);
    }

    @Bean
    @ConfigurationProperties("spring.datasource.druid.order2")
    @ConditionalOnProperty(prefix = "spring.datasource.druid.order2", name = "enabled", havingValue = "true")
    public DataSource order2DataSource(DruidProperties druidProperties) {
        DruidDataSource dataSource = DruidDataSourceBuilder.create().build();
        return druidProperties.dataSource(dataSource);
    }

    // Variant 1 (commented out): inline Groovy expressions for both database and table strategies.
    // @Bean(name = "shardingDataSource")
    // public DataSource shardingDataSource(@Qualifier("order1DataSource") DataSource order1DataSource, @Qualifier("order2DataSource") DataSource order2DataSource) throws SQLException {
    //     Map<String, DataSource> dataSourceMap = new HashMap<>();
    //     dataSourceMap.put("order1", order1DataSource);
    //     dataSourceMap.put("order2", order2DataSource);
    //
    //     // sys_order table rule (Groovy expression for the actual data nodes)
    //     TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration("sys_order", "order$->{1..2}.sys_order_$->{0..1}");
    //
    //     // database sharding strategy
    //     orderTableRuleConfig.setDatabaseShardingStrategyConfig(new InlineShardingStrategyConfiguration("user_id", "order$->{user_id % 2 + 1}"));
    //     // table sharding strategy
    //     orderTableRuleConfig.setTableShardingStrategyConfig(new InlineShardingStrategyConfiguration("order_id", "sys_order_$->{order_id % 2}"));
    //     // distributed primary key
    //     orderTableRuleConfig.setKeyGeneratorConfig(new KeyGeneratorConfiguration("SNOWFLAKE", "order_id"));
    //
    //     // sharding rule
    //     ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
    //     shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);
    //
    //     // default database sharding strategy
    //     shardingRuleConfig.setDefaultDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("user_id", new ShardingAlgorithm()));
    //
    //     // system properties
    //     Properties shardingProperties = new Properties();
    //     shardingProperties.put("sql.show", true);
    //
    //     // build the sharding data source
    //     return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, shardingProperties);
    // }

    // Variant 2 (active): inline database strategy plus a custom standard table sharding algorithm.
    @Bean(name = "shardingDataSource")
    public DataSource shardingDataSource(@Qualifier("order1DataSource") DataSource order1DataSource, @Qualifier("order2DataSource") DataSource order2DataSource) throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("order1", order1DataSource);
        dataSourceMap.put("order2", order2DataSource);

        // sys_order table rule
        TableRuleConfiguration tableRuleConfiguration = new TableRuleConfiguration("sys_order", "order$->{1..2}.sys_order_$->{0..1}");
        // database sharding strategy
        tableRuleConfiguration.setDatabaseShardingStrategyConfig(new InlineShardingStrategyConfiguration("user_id", "order$->{user_id % 2 + 1}"));
        // inline table sharding strategy (replaced by the custom algorithm below)
        // tableRuleConfiguration.setTableShardingStrategyConfig(new InlineShardingStrategyConfiguration("order_id", "sys_order_$->{order_id % 2}"));
        // custom table sharding rule
        tableRuleConfiguration.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("order_id", new ShardingAlgorithm()));
        // distributed primary key
        tableRuleConfiguration.setKeyGeneratorConfig(new KeyGeneratorConfiguration("SNOWFLAKE", "order_id"));

        // complex sharding rule for another table (example)
        // TableRuleConfiguration orderInfoTableRule = new TableRuleConfiguration("order_info");
        // ComplexShardingStrategyConfiguration complexShardingStrategyConfiguration =
        //         new ComplexShardingStrategyConfiguration("order_no,merchant_id",
        //                 new ComplexDatabaseShardingAlgorithm());
        // orderInfoTableRule.setDatabaseShardingStrategyConfig(complexShardingStrategyConfiguration);

        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        shardingRuleConfig.getTableRuleConfigs().add(tableRuleConfiguration);
        // shardingRuleConfig.getTableRuleConfigs().add(orderInfoTableRule);

        // system properties
        Properties properties = new Properties();
        properties.put("sql.show", true);

        return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, properties);
    }
}
```
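The custom `ShardingAlgorithm` referenced above is not shown in this article. In ShardingSphere 4.x it would implement `org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm`, choosing one target table name from the available names based on the sharding value. The sketch below shows what it presumably does (route by `order_id % 2`); a local stand-in interface replaces the real one so the example runs without the dependency, and all names are assumptions:

```java
import java.util.Arrays;
import java.util.Collection;

// Hedged sketch of the custom table sharding algorithm: pick sys_order_0 or
// sys_order_1 by order_id % 2. The interface below is a local stand-in for
// ShardingSphere's PreciseShardingAlgorithm contract.
public class ShardingAlgorithmSketch {

    interface PreciseAlgorithm {
        String doSharding(Collection<String> availableTargetNames, long shardingValue);
    }

    static final PreciseAlgorithm ORDER_TABLE_ALGORITHM = (names, value) -> {
        String suffix = String.valueOf(value % 2);
        // pick the target table whose name ends with the computed suffix
        return names.stream()
                .filter(n -> n.endsWith("_" + suffix))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no table for value " + value));
    };

    public static void main(String[] args) {
        Collection<String> tables = Arrays.asList("sys_order_0", "sys_order_1");
        System.out.println(ORDER_TABLE_ALGORITHM.doSharding(tables, 100L)); // sys_order_0
        System.out.println(ORDER_TABLE_ALGORITHM.doSharding(tables, 101L)); // sys_order_1
    }
}
```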
11. Testing the sharding setup

Service layer:

```java
import com.common.annotation.DataSource;
import com.common.enums.DataSourceType;
import com.common.mapper.SysOrderMapper;
import com.common.pojo.SysOrder;
import com.common.service.ISysOrderService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * Order service implementation
 */
@Service
public class SysOrderServiceImpl implements ISysOrderService {

    @Autowired
    private SysOrderMapper myShardingMapper;

    /**
     * Query an order
     *
     * @param orderId order ID
     * @return order details
     */
    @Override
    @DataSource(DataSourceType.SHARDING)
    public SysOrder selectSysOrderById(Long orderId) {
        return myShardingMapper.selectSysOrderById(orderId);
    }

    /**
     * Query the order list
     *
     * @param sysOrder query criteria
     * @return order list
     */
    @Override
    @DataSource(DataSourceType.SHARDING)
    public List<SysOrder> selectSysOrderList(SysOrder sysOrder) {
        return myShardingMapper.selectSysOrderList(sysOrder);
    }

    /**
     * Insert an order
     *
     * @param sysOrder order
     * @return affected rows
     */
    @Override
    @DataSource(DataSourceType.SHARDING)
    public int insertSysOrder(SysOrder sysOrder) {
        return myShardingMapper.insertSysOrder(sysOrder);
    }
}
```
Controller:

```java
import com.common.pojo.SysOrder;
import com.common.service.ISysOrderService;
import com.common.utils.ResponseUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.UUID;

/**
 * Order controller
 */
@RestController
@RequestMapping("/order")
public class SysOrderController {

    @Autowired
    private ISysOrderService sysOrderService;

    @GetMapping("/add/{userId}")
    public ResponseUtils add(@PathVariable("userId") Long userId) {
        SysOrder sysOrder = new SysOrder();
        sysOrder.setUserId(userId);
        sysOrder.setStatus("0");
        sysOrder.setOrderNo(UUID.randomUUID().toString());
        return ResponseUtils.success(sysOrderService.insertSysOrder(sysOrder));
    }

    @GetMapping("/list")
    public ResponseUtils list() {
        return ResponseUtils.success(sysOrderService.selectSysOrderList(new SysOrder()));
    }

    @GetMapping("/query/{orderId}")
    public ResponseUtils query(@PathVariable("orderId") Long orderId) {
        return ResponseUtils.success(sysOrderService.selectSysOrderById(orderId));
    }
}
```
Verification:
Requesting http://localhost:8080/order/add/1 writes to ry-order2 (user_id % 2 + 1 = 2).
Requesting http://localhost:8080/order/add/2 writes to ry-order1.
Within each database, the row lands in sys_order_0 or sys_order_1 according to order_id % 2.
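The routing arithmetic behind these observations can be checked in isolation. A small sketch (class and method names are illustrative only) that evaluates the same expressions as the inline sharding configuration:

```java
// Pure-Java check of the inline sharding expressions:
// database index = user_id % 2 + 1, table suffix = order_id % 2.
public class RouteCheck {

    // mirrors "order$->{user_id % 2 + 1}" (order1 -> ry-order1, order2 -> ry-order2)
    static String database(long userId) {
        return "ry-order" + (userId % 2 + 1);
    }

    // mirrors "sys_order_$->{order_id % 2}"
    static String table(long orderId) {
        return "sys_order_" + (orderId % 2);
    }

    public static void main(String[] args) {
        System.out.println(database(1)); // ry-order2, matching /order/add/1
        System.out.println(database(2)); // ry-order1, matching /order/add/2
        System.out.println(table(1001)); // sys_order_1
        System.out.println(table(1000)); // sys_order_0
    }
}
```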
12. Source code download