DPDK Series, Part 36: Packet Forwarding


    1. Network Packet Processing

    Anyone who has studied network communication knows that, at the bottom, data on the wire is just one packet (frame) after another. In other words, what network devices forward is nothing more than streams of binary data, one packet at a time. To a device or a driver these bits carry no particular meaning; the device only validates, processes, and forwards them. It works much like a logistics transfer station: it checks whether a parcel is damaged and where it is headed, then drops it onto the right conveyor belt. Network packets are handled the same way.
    In the real world, when the Singles' Day shopping festival arrives and parcel volume explodes, a logistics center has to respond: add staff and machines, streamline its processes, or simply upgrade to automated robotic warehousing. The network world is no different. It has a complete data-processing flow from software down to hardware, covering the application framework, the algorithms, and the hardware handling discussed later.
    So the first thing to understand is which modules make up network packet processing:
    1. Input and output modules, the interfaces through which packets enter and leave:
    Packet input: packets received into the system
    Packet output: packets sent out by the hardware
    2. Modules that process the packets:
    Pre-processing: coarse-grained packet processing
    Input classification: finer-grained packet classification and steering
    3. Data management and control modules:
    Ingress queuing: descriptor-based FIFO queues
    Delivery/Scheduling: scheduling based on queue priority and CPU state
    Accelerator: hardware functions such as encryption/decryption and compression/decompression
    Egress queuing: scheduling at the egress according to QoS class
    4. Cleanup once processing is done:
    Post-processing: final packet handling and buffer release
    Once these modules are divided up by function like this, the picture becomes clear at once, arguably clearer than a diagram would make it.

    2. Forwarding Application Frameworks

    Talking about the application framework means talking about forwarding models, and once models come up the picture is largely settled: unless there is a real technical breakthrough, the models rarely change. Two models are used here:
    1. The pipeline model (Packet Framework)
    Pipeline is easy to understand; the instruction pipeline inside a CPU uses the same idea. A pipeline model suits regular, rhythmic work; for example, CPU-bound and I/O-bound stages can each be handled by a dedicated engine. In DPDK, the Packet Framework can be viewed at two levels: zoom-out (the multi-core application framework) and zoom-in (an individual pipeline block).
    Inside these blocks, packet processing is built from three parts: logical ports, lookup tables, and processing logic (actions). A port is the input of each pipeline block, the lookup table decides how a packet should be handled, and the processing logic determines the packet's treatment and its final destination. Stacking such blocks layer upon layer forms a pipeline.
    The pipeline blocks DPDK's Packet Framework supports include:
    Packet I/O
    Flow classification
    Firewall
    Routing
    Metering
    Traffic Mgmt
    All of these pipelines can be assembled and used simply through configuration files. However, because of the constraints of the pipeline structure, this model is not easy to extend, and its multi-core support is not as good as RTC's. The sketch below illustrates the port/table/action idea in simplified form.
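    To make the port/table/action structure concrete, here is a minimal, self-contained sketch. It deliberately does not use the real librte_pipeline API; every type and function name in it is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for a packet and a forwarding action. */
    struct pkt { uint32_t dst_ip; };
    enum action { ACTION_DROP, ACTION_PORT0, ACTION_PORT1 };

    /* The "table": maps a /24 prefix of the destination address to an action. */
    static enum action table_lookup(uint32_t dst_ip)
    {
        switch (dst_ip >> 8) {
        case 0x0a0000: return ACTION_PORT0;   /* 10.0.0.0/24 -> port 0 */
        case 0x0a0001: return ACTION_PORT1;   /* 10.0.1.0/24 -> port 1 */
        default:       return ACTION_DROP;
        }
    }

    /* One pipeline stage: take a burst from an input port, consult the table,
     * and apply the resulting action to each packet. */
    static void pipeline_stage(const struct pkt *in, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            printf("pkt %zu -> action %d\n", i, table_lookup(in[i].dst_ip));
    }

    int main(void)
    {
        struct pkt burst[2] = { { 0x0a000005 }, { 0x0a000105 } };
        pipeline_stage(burst, 2);   /* DPDK's framework chains such stages */
        return 0;
    }

    In the real Packet Framework the same three roles are played by the rte_port and rte_table libraries plus the per-table action handlers, wired together through the configuration file.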

    2. The run-to-completion model (RTC)
    Readers who have done network programming will probably think of IOCP (I/O completion ports) here, and the two are indeed very similar: both exist to squeeze the most out of multiple cores. RTC works particularly well for flows whose processing steps have little contextual dependence on one another and can run in parallel; each core can be assigned dynamically to run the whole processing path, and the design scales out easily.
    In DPDK, command-line parameters bind logical cores to threads, so each receive/transmit queue can be tied to its own logical core, guaranteeing that a given packet is processed entirely within one thread. At the same time, running on general-purpose processor cores keeps the programming model simple. A sketch of this polling loop follows.
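    Below is a minimal sketch of the per-core run-to-completion loop, assuming the port and its queues have already been configured and started elsewhere (rte_eth_dev_configure(), rte_eth_rx_queue_setup(), and so on); port_id is a placeholder and error handling is omitted.

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* One RTC worker: the lcore polls its own RX queue, processes the packets
     * on the spot, and transmits them from the same core. */
    static int rtc_loop(void *arg)
    {
        const uint16_t port_id = 0;                          /* placeholder */
        const uint16_t queue_id = (uint16_t)(uintptr_t)arg;  /* one queue per lcore */
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            /* Receive a burst of packets from this core's queue. */
            uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* ... per-packet processing happens here, on this core ... */

            /* Transmit what was processed; free anything the NIC refused. */
            uint16_t nb_tx = rte_eth_tx_burst(port_id, queue_id, bufs, nb_rx);
            for (uint16_t i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
        return 0;
    }

    Each worker would typically be launched with rte_eal_remote_launch(rtc_loop, (void *)(uintptr_t)queue_id, lcore_id), one queue per lcore, which is exactly the "one packet, one thread" property described above.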

    3. Comparing the two
    From the analysis above we can summarize: for traffic that needs a high degree of parallelism but little per-stage optimization, the RTC model is the better fit; otherwise the pipeline model is. The former suits highly concurrent short connections, while the latter suits long connections carrying continuous data, where per-stage optimizations are easier to apply.

    3. Related Algorithms

    The related algorithms are fairly straightforward; the main ones are the following:
    1. Exact-match algorithm
    As the name suggests, a key either matches exactly or it does not. In networking the usual tool is a hash table; whichever hash function is used, the idea is the same. Hashing requires handling collisions, and the two common approaches are still chaining and open addressing, both well-worn topics that need no repetition here.
    DPDK also optimizes the hash computation itself: keys are processed with attention to byte alignment, and hardware instructions (e.g. the CRC instruction) compute the signature in one pass where available, falling back to lookup tables when hardware support is missing, a classic space-for-time trade-off. A short rte_hash sketch follows.
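    As an illustration, here is a minimal exact-match sketch using DPDK's rte_hash library. It assumes the EAL has already been initialized; the table name, its size and the key type are all illustrative choices.

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    /* Create a small exact-match table keyed by a 32-bit value. */
    static struct rte_hash *create_flow_table(void)
    {
        struct rte_hash_parameters params = {
            .name = "flow_table",           /* illustrative name */
            .entries = 1024,
            .key_len = sizeof(uint32_t),
            .hash_func = rte_jhash,         /* a CRC-based hash also works */
            .hash_func_init_val = 0,
            .socket_id = 0,                 /* placeholder NUMA socket */
        };
        return rte_hash_create(&params);
    }

    static void flow_table_demo(void)
    {
        struct rte_hash *h = create_flow_table();
        uint32_t key = 0x0a000001;          /* e.g. an IPv4 address */

        /* Insert the key; a non-negative return value is the internal index. */
        int pos = rte_hash_add_key(h, &key);

        /* Exact lookup: a negative return value means "not found". */
        int found = rte_hash_lookup(h, &key);
        (void)pos; (void)found;

        rte_hash_free(h);
    }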
    2. Longest-match algorithm
    Longest prefix match (LPM) is the algorithm routers use to select an entry from the routing table for an IP destination: among all prefixes that match the address, the longest one wins. DPDK provides it through the rte_lpm library; a sketch follows.
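    A minimal IPv4 LPM sketch using DPDK's rte_lpm, again assuming an initialized EAL; the table sizes, prefixes and next-hop ids are illustrative.

    #include <stdint.h>
    #include <rte_lpm.h>

    /* Addresses are passed to rte_lpm in host byte order. */
    #define IPV4_ADDR(a, b, c, d) \
        ((uint32_t)(((a) << 24) | ((b) << 16) | ((c) << 8) | (d)))

    static void lpm_demo(void)
    {
        struct rte_lpm_config config = {
            .max_rules = 1024,
            .number_tbl8s = 256,
            .flags = 0,
        };
        struct rte_lpm *lpm = rte_lpm_create("lpm_table", 0 /* socket */, &config);

        /* 10.0.1.0/24 -> next hop 1, 10.0.0.0/16 -> next hop 2 */
        rte_lpm_add(lpm, IPV4_ADDR(10, 0, 1, 0), 24, 1);
        rte_lpm_add(lpm, IPV4_ADDR(10, 0, 0, 0), 16, 2);

        /* The longest matching prefix wins: 10.0.1.5 resolves to next hop 1. */
        uint32_t next_hop = 0;
        int hit = rte_lpm_lookup(lpm, IPV4_ADDR(10, 0, 1, 5), &next_hop);
        /* hit == 0 and next_hop == 1 here */
        (void)hit;

        rte_lpm_free(lpm);
    }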

    3. The ACL algorithm
    ACL classification matches incoming packets against a library of access-control rules. The ACL library matches packets against N-tuple rules and provides the following operations (roughly corresponding to rte_acl_create, rte_acl_add_rules, rte_acl_build, rte_acl_classify and rte_acl_free):
    Create an ACL context
    Add rules to the context
    Build the runtime structures for all added rules
    Classify packets on the ingress path
    Destroy the ACL context and its resources

    4. Packet Distribution

    DPDK also provides a packet-distribution library and API (librte_distributor). The principle is simple: a distributor hands packets out to different workers, typically using the flow tag carried in each mbuf to decide which worker gets which packet, and then collects the processed packets back. Together this forms a complete distribution flow, sketched below.
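    The following is a condensed sketch of the distributor/worker pattern built on librte_distributor, assuming the DPDK 19.11 burst API. The lcore functions, worker ids, burst sizes and port/queue numbers are illustrative, and mempool/port initialization is omitted.

    #include <stdint.h>
    #include <rte_distributor.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BURST 32

    /* Created once in main() after EAL init, e.g.:
     * dist = rte_distributor_create("pkt_dist", rte_socket_id(),
     *                               nb_workers, RTE_DIST_ALG_BURST); */
    static struct rte_distributor *dist;

    /* Distributor lcore: pull packets from the NIC, hand them to the workers,
     * then transmit whatever the workers have finished with. */
    static int dist_lcore(void *arg)
    {
        struct rte_mbuf *bufs[RX_BURST], *done[RX_BURST];
        (void)arg;

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(0, 0, bufs, RX_BURST);
            if (nb_rx > 0)
                rte_distributor_process(dist, bufs, nb_rx);

            int nb_done = rte_distributor_returned_pkts(dist, done, RX_BURST);
            if (nb_done > 0)
                rte_eth_tx_burst(0, 0, done, (uint16_t)nb_done);
        }
        return 0;
    }

    /* Worker lcore: repeatedly ask the distributor for packets, process them,
     * and hand the previous batch back with the next request. */
    static int worker_lcore(void *arg)
    {
        const unsigned int worker_id = (unsigned int)(uintptr_t)arg;
        struct rte_mbuf *pkts[8];
        unsigned int nb = 0;

        for (;;) {
            nb = rte_distributor_get_pkt(dist, worker_id, pkts, pkts, nb);
            /* ... per-packet work on pkts[0..nb-1] goes here ... */
        }
        return 0;
    }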

    5. Source Code

    Let's now look at the relevant DPDK source code:

    //dpdk-stable-19.11.14/lib/librte_eventdev
    
    ......
    #include "rte_eventdev.h"
    #include "rte_eventdev_pmd.h"
    
    static struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
    
    struct rte_eventdev *rte_eventdevs = rte_event_devices;
    
    static struct rte_eventdev_global eventdev_globals = {
    	.nb_devs		= 0
    };
    
    /* Event dev north bound API implementation */
    
    uint8_t
    rte_event_dev_count(void)
    {
    	return eventdev_globals.nb_devs;
    }
    
    int
    rte_event_dev_get_dev_id(const char *name)
    {
    	int i;
    	uint8_t cmp;
    
    	if (!name)
    		return -EINVAL;
    
    	for (i = 0; i < eventdev_globals.nb_devs; i++) {
    		cmp = (strncmp(rte_event_devices[i].data->name, name,
    				RTE_EVENTDEV_NAME_MAX_LEN) == 0) ||
    			(rte_event_devices[i].dev ? (strncmp(
    				rte_event_devices[i].dev->driver->name, name,
    					 RTE_EVENTDEV_NAME_MAX_LEN) == 0) : 0);
    		if (cmp && (rte_event_devices[i].attached ==
    					RTE_EVENTDEV_ATTACHED))
    			return i;
    	}
    	return -ENODEV;
    }
    
    int
    rte_event_dev_socket_id(uint8_t dev_id)
    {
    	struct rte_eventdev *dev;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    	dev = &rte_eventdevs[dev_id];
    
    	return dev->data->socket_id;
    }
    
    int
    rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
    {
    	struct rte_eventdev *dev;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    	dev = &rte_eventdevs[dev_id];
    
    	if (dev_info == NULL)
    		return -EINVAL;
    
    	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
    
    	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
    	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
    
    	dev_info->dequeue_timeout_ns = dev->data->dev_conf.dequeue_timeout_ns;
    
    	dev_info->dev = dev->dev;
    	return 0;
    }
    ......
    int
    rte_event_port_link(uint8_t dev_id, uint8_t port_id,
    		    const uint8_t queues[], const uint8_t priorities[],
    		    uint16_t nb_links)
    {
    	struct rte_eventdev *dev;
    	uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
    	uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
    	uint16_t *links_map;
    	int i, diag;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
    	dev = &rte_eventdevs[dev_id];
    
    	if (*dev->dev_ops->port_link == NULL) {
    		RTE_EDEV_LOG_ERR("Function not supported\n");
    		rte_errno = ENOTSUP;
    		return 0;
    	}
    
    	if (!is_valid_port(dev, port_id)) {
    		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
    		rte_errno = EINVAL;
    		return 0;
    	}
    
    	if (queues == NULL) {
    		for (i = 0; i < dev->data->nb_queues; i++)
    			queues_list[i] = i;
    
    		queues = queues_list;
    		nb_links = dev->data->nb_queues;
    	}
    
    	if (priorities == NULL) {
    		for (i = 0; i < nb_links; i++)
    			priorities_list[i] = RTE_EVENT_DEV_PRIORITY_NORMAL;
    
    		priorities = priorities_list;
    	}
    
    	for (i = 0; i < nb_links; i++)
    		if (queues[i] >= dev->data->nb_queues) {
    			rte_errno = EINVAL;
    			return 0;
    		}
    
    	diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
    						queues, priorities, nb_links);
    	if (diag < 0)
    		return diag;
    
    	links_map = dev->data->links_map;
    	/* Point links_map to this port specific area */
    	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    	for (i = 0; i < diag; i++)
    		links_map[queues[i]] = (uint8_t)priorities[i];
    
    	return diag;
    }
    
    int
    rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
    		      uint8_t queues[], uint16_t nb_unlinks)
    {
    	struct rte_eventdev *dev;
    	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
    	int i, diag, j;
    	uint16_t *links_map;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
    	dev = &rte_eventdevs[dev_id];
    
    	if (*dev->dev_ops->port_unlink == NULL) {
    		RTE_EDEV_LOG_ERR("Function not supported");
    		rte_errno = ENOTSUP;
    		return 0;
    	}
    
    	if (!is_valid_port(dev, port_id)) {
    		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
    		rte_errno = EINVAL;
    		return 0;
    	}
    
    	links_map = dev->data->links_map;
    	/* Point links_map to this port specific area */
    	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    
    	if (queues == NULL) {
    		j = 0;
    		for (i = 0; i < dev->data->nb_queues; i++) {
    			if (links_map[i] !=
    					EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
    				all_queues[j] = i;
    				j++;
    			}
    		}
    		queues = all_queues;
    	} else {
    		for (j = 0; j < nb_unlinks; j++) {
    			if (links_map[queues[j]] ==
    					EVENT_QUEUE_SERVICE_PRIORITY_INVALID)
    				break;
    		}
    	}
    
    	nb_unlinks = j;
    	for (i = 0; i < nb_unlinks; i++)
    		if (queues[i] >= dev->data->nb_queues) {
    			rte_errno = EINVAL;
    			return 0;
    		}
    
    	diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
    					queues, nb_unlinks);
    
    	if (diag < 0)
    		return diag;
    
    	for (i = 0; i < diag; i++)
    		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
    
    	return diag;
    }
    
    int
    rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
    {
    	struct rte_eventdev *dev;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    	dev = &rte_eventdevs[dev_id];
    	if (!is_valid_port(dev, port_id)) {
    		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
    		return -EINVAL;
    	}
    
    	/* Return 0 if the PMD does not implement unlinks in progress.
    	 * This allows PMDs which handle unlink synchronously to not implement
    	 * this function at all.
    	 */
    	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlinks_in_progress, 0);
    
    	return (*dev->dev_ops->port_unlinks_in_progress)(dev,
    			dev->data->ports[port_id]);
    }
    
    int
    rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
    			 uint8_t queues[], uint8_t priorities[])
    {
    	struct rte_eventdev *dev;
    	uint16_t *links_map;
    	int i, count = 0;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    	dev = &rte_eventdevs[dev_id];
    	if (!is_valid_port(dev, port_id)) {
    		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
    		return -EINVAL;
    	}
    
    	links_map = dev->data->links_map;
    	/* Point links_map to this port specific area */
    	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    	for (i = 0; i < dev->data->nb_queues; i++) {
    		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
    			queues[count] = i;
    			priorities[count] = (uint8_t)links_map[i];
    			++count;
    		}
    	}
    	return count;
    }
    
    int
    rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
    				 uint64_t *timeout_ticks)
    {
    	struct rte_eventdev *dev;
    
    	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    	dev = &rte_eventdevs[dev_id];
    	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timeout_ticks, -ENOTSUP);
    
    	if (timeout_ticks == NULL)
    		return -EINVAL;
    
    	return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
    }
    ...
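    For context, an application drives the north-bound calls above roughly as follows. This is a hedged sketch: the device is assumed to have been configured and started already, and dev_id/port_id are placeholders.

    #include <rte_eventdev.h>

    /* Link every event queue of an already-configured device to one event port.
     * Passing NULL for queues/priorities triggers exactly the fallback path
     * visible in rte_event_port_link() above: all queues, normal priority. */
    static int link_all_queues(uint8_t dev_id, uint8_t port_id)
    {
        int nb_linked = rte_event_port_link(dev_id, port_id, NULL, NULL, 0);
        if (nb_linked <= 0)
            return -1;      /* rte_errno explains why nothing was linked */

        /* The established links can later be queried or torn down again. */
        uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
        uint8_t prios[RTE_EVENT_MAX_QUEUES_PER_DEV];
        int n = rte_event_port_links_get(dev_id, port_id, queues, prios);
        (void)n;

        return nb_linked;
    }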
    
    

    The two core groups of APIs live in rte_eventdev.c and rte_service.c. The code above came from the former; now let's look at the latter:

    
    
    #include "eal_private.h"
    
    #define RTE_SERVICE_NUM_MAX 64
    
    #define SERVICE_F_REGISTERED    (1 << 0)
    #define SERVICE_F_STATS_ENABLED (1 << 1)
    #define SERVICE_F_START_CHECK   (1 << 2)
    
    /* runstates for services and lcores, denoting if they are active or not */
    #define RUNSTATE_STOPPED 0
    #define RUNSTATE_RUNNING 1
    
    /* internal representation of a service */
    struct rte_service_spec_impl {
    	/* public part of the struct */
    	struct rte_service_spec spec;
    
    	/* atomic lock that when set indicates a service core is currently
    	 * running this service callback. When not set, a core may take the
    	 * lock and then run the service callback.
    	 */
    	rte_atomic32_t execute_lock;
    
    	/* API set/get-able variables */
    	int8_t app_runstate;
    	int8_t comp_runstate;
    	uint8_t internal_flags;
    
    	/* per service statistics */
    	/* Indicates how many cores the service is mapped to run on.
    	 * It does not indicate the number of cores the service is running
    	 * on currently.
    	 */
    	rte_atomic32_t num_mapped_cores;
    	uint64_t calls;
    	uint64_t cycles_spent;
    } __rte_cache_aligned;
    
    /* the internal values of a service core */
    struct core_state {
    	/* map of services IDs are run on this core */
    	uint64_t service_mask;
    	uint8_t runstate; /* running or stopped */
    	uint8_t is_service_core; /* set if core is currently a service core */
    	uint8_t service_active_on_lcore[RTE_SERVICE_NUM_MAX];
    	uint64_t loops;
    	uint64_t calls_per_service[RTE_SERVICE_NUM_MAX];
    } __rte_cache_aligned;
    
    static uint32_t rte_service_count;
    static struct rte_service_spec_impl *rte_services;
    static struct core_state *lcore_states;
    static uint32_t rte_service_library_initialized;
    
    int32_t
    rte_service_init(void)
    {
    	if (rte_service_library_initialized) {
    		RTE_LOG(NOTICE, EAL,
    			"service library init() called, init flag %d\n",
    			rte_service_library_initialized);
    		return -EALREADY;
    	}
    
    	rte_services = rte_calloc("rte_services", RTE_SERVICE_NUM_MAX,
    			sizeof(struct rte_service_spec_impl),
    			RTE_CACHE_LINE_SIZE);
    	if (!rte_services) {
    		RTE_LOG(ERR, EAL, "error allocating rte services array\n");
    		goto fail_mem;
    	}
    
    	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
    			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
    	if (!lcore_states) {
    		RTE_LOG(ERR, EAL, "error allocating core states array\n");
    		goto fail_mem;
    	}
    
    	int i;
    	struct rte_config *cfg = rte_eal_get_configuration();
    	for (i = 0; i < RTE_MAX_LCORE; i++) {
    		if (lcore_config[i].core_role == ROLE_SERVICE) {
    			if ((unsigned int)i == cfg->master_lcore)
    				continue;
    			rte_service_lcore_add(i);
    		}
    	}
    
    	rte_service_library_initialized = 1;
    	return 0;
    fail_mem:
    	rte_free(rte_services);
    	rte_free(lcore_states);
    	return -ENOMEM;
    }
    
    ......
    static int32_t
    service_runner_func(void *arg)
    {
    	RTE_SET_USED(arg);
    	uint32_t i;
    	const int lcore = rte_lcore_id();
    	struct core_state *cs = &lcore_states[lcore];
    
    	while (cs->runstate == RUNSTATE_RUNNING) {
    		const uint64_t service_mask = cs->service_mask;
    
    		for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
    			if (!service_valid(i))
    				continue;
    			/* return value ignored as no change to code flow */
    			service_run(i, cs, service_mask, service_get(i), 1);
    		}
    
    		cs->loops++;
    
    		rte_smp_rmb();
    	}
    
    	/* Switch off this core for all services, to ensure that future
    	 * calls to may_be_active() know this core is switched off.
    	 */
    	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
    		cs->service_active_on_lcore[i] = 0;
    
    	return 0;
    }
    
    int32_t
    rte_service_lcore_count(void)
    {
    	int32_t count = 0;
    	uint32_t i;
    	for (i = 0; i < RTE_MAX_LCORE; i++)
    		count += lcore_states[i].is_service_core;
    	return count;
    }
    
    int32_t
    rte_service_lcore_list(uint32_t array[], uint32_t n)
    {
    	uint32_t count = rte_service_lcore_count();
    	if (count > n)
    		return -ENOMEM;
    
    	if (!array)
    		return -EINVAL;
    
    	uint32_t i;
    	uint32_t idx = 0;
    	for (i = 0; i < RTE_MAX_LCORE; i++) {
    		struct core_state *cs = &lcore_states[i];
    		if (cs->is_service_core) {
    			array[idx] = i;
    			idx++;
    		}
    	}
    
    	return count;
    }
    
    int32_t
    rte_service_lcore_count_services(uint32_t lcore)
    {
    	if (lcore >= RTE_MAX_LCORE)
    		return -EINVAL;
    
    	struct core_state *cs = &lcore_states[lcore];
    	if (!cs->is_service_core)
    		return -ENOTSUP;
    
    	return __builtin_popcountll(cs->service_mask);
    }
    
    int32_t
    rte_service_start_with_defaults(void)
    {
    	/* create a default mapping from cores to services, then start the
    	 * services to make them transparent to unaware applications.
    	 */
    	uint32_t i;
    	int ret;
    	uint32_t count = rte_service_get_count();
    
    	int32_t lcore_iter = 0;
    	uint32_t ids[RTE_MAX_LCORE] = {0};
    	int32_t lcore_count = rte_service_lcore_list(ids, RTE_MAX_LCORE);
    
    	if (lcore_count == 0)
    		return -ENOTSUP;
    
    	for (i = 0; (int)i < lcore_count; i++)
    		rte_service_lcore_start(ids[i]);
    
    	for (i = 0; i < count; i++) {
    		/* do 1:1 core mapping here, with each service getting
    		 * assigned a single core by default. Adding multiple services
    		 * should multiplex to a single core, or 1:1 if there are the
    		 * same amount of services as service-cores
    		 */
    		ret = rte_service_map_lcore_set(i, ids[lcore_iter], 1);
    		if (ret)
    			return -ENODEV;
    
    		lcore_iter++;
    		if (lcore_iter >= lcore_count)
    			lcore_iter = 0;
    
    		ret = rte_service_runstate_set(i, 1);
    		if (ret)
    			return -ENOEXEC;
    	}
    
    	return 0;
    }
    
    static int32_t
    service_update(struct rte_service_spec *service, uint32_t lcore,
    		uint32_t *set, uint32_t *enabled)
    {
    	uint32_t i;
    	int32_t sid = -1;
    
    	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
    		if ((struct rte_service_spec *)&rte_services[i] == service &&
    				service_valid(i)) {
    			sid = i;
    			break;
    		}
    	}
    
    	if (sid == -1 || lcore >= RTE_MAX_LCORE)
    		return -EINVAL;
    
    	if (!lcore_states[lcore].is_service_core)
    		return -EINVAL;
    
    	uint64_t sid_mask = UINT64_C(1) << sid;
    	if (set) {
    		uint64_t lcore_mapped = lcore_states[lcore].service_mask &
    			sid_mask;
    
    		if (*set && !lcore_mapped) {
    			lcore_states[lcore].service_mask |= sid_mask;
    			rte_atomic32_inc(&rte_services[sid].num_mapped_cores);
    		}
    		if (!*set && lcore_mapped) {
    			lcore_states[lcore].service_mask &= ~(sid_mask);
    			rte_atomic32_dec(&rte_services[sid].num_mapped_cores);
    		}
    	}
    
    	if (enabled)
    		*enabled = !!(lcore_states[lcore].service_mask & (sid_mask));
    
    	rte_smp_wmb();
    
    	return 0;
    }
    
    int32_t
    rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
    {
    	struct rte_service_spec_impl *s;
    	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
    	uint32_t on = enabled > 0;
    	return service_update(&s->spec, lcore, &on, 0);
    }
    
    int32_t
    rte_service_map_lcore_get(uint32_t id, uint32_t lcore)
    {
    	struct rte_service_spec_impl *s;
    	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
    	uint32_t enabled;
    	int ret = service_update(&s->spec, lcore, 0, &enabled);
    	if (ret == 0)
    		return enabled;
    	return ret;
    }
    
    static void
    set_lcore_state(uint32_t lcore, int32_t state)
    {
    	/* mark core state in hugepage backed config */
    	struct rte_config *cfg = rte_eal_get_configuration();
    	cfg->lcore_role[lcore] = state;
    
    	/* mark state in process local lcore_config */
    	lcore_config[lcore].core_role = state;
    
    	/* update per-lcore optimized state tracking */
    	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
    }
    
    int32_t
    rte_service_lcore_reset_all(void)
    {
    	/* loop over cores, reset all to mask 0 */
    	uint32_t i;
    	for (i = 0; i < RTE_MAX_LCORE; i++) {
    		if (lcore_states[i].is_service_core) {
    			lcore_states[i].service_mask = 0;
    			set_lcore_state(i, ROLE_RTE);
    			lcore_states[i].runstate = RUNSTATE_STOPPED;
    		}
    	}
    	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
    		rte_atomic32_set(&rte_services[i].num_mapped_cores, 0);
    
    	rte_smp_wmb();
    
    	return 0;
    }
    
    int32_t
    rte_service_lcore_add(uint32_t lcore)
    {
    	if (lcore >= RTE_MAX_LCORE)
    		return -EINVAL;
    	if (lcore_states[lcore].is_service_core)
    		return -EALREADY;
    
    	set_lcore_state(lcore, ROLE_SERVICE);
    
    	/* ensure that after adding a core the mask and state are defaults */
    	lcore_states[lcore].service_mask = 0;
    	lcore_states[lcore].runstate = RUNSTATE_STOPPED;
    
    	rte_smp_wmb();
    
    	return rte_eal_wait_lcore(lcore);
    }
    
    int32_t
    rte_service_lcore_del(uint32_t lcore)
    {
    	if (lcore >= RTE_MAX_LCORE)
    		return -EINVAL;
    
    	struct core_state *cs = &lcore_states[lcore];
    	if (!cs->is_service_core)
    		return -EINVAL;
    
    	if (cs->runstate != RUNSTATE_STOPPED)
    		return -EBUSY;
    
    	set_lcore_state(lcore, ROLE_RTE);
    
    	rte_smp_wmb();
    	return 0;
    }
    
    int32_t
    rte_service_lcore_start(uint32_t lcore)
    {
    	if (lcore >= RTE_MAX_LCORE)
    		return -EINVAL;
    
    	struct core_state *cs = &lcore_states[lcore];
    	if (!cs->is_service_core)
    		return -EINVAL;
    
    	if (cs->runstate == RUNSTATE_RUNNING)
    		return -EALREADY;
    
    	/* set core to run state first, and then launch otherwise it will
    	 * return immediately as runstate keeps it in the service poll loop
    	 */
    	cs->runstate = RUNSTATE_RUNNING;
    
    	int ret = rte_eal_remote_launch(service_runner_func, 0, lcore);
    	/* returns -EBUSY if the core is already launched, 0 on success */
    	return ret;
    }
    
    ......
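    For reference, an application (or a component such as an eventdev software PMD) drives these service-core APIs roughly as follows. This is a hedged sketch: the service name and lcore number are placeholders, and the lcore is assumed to be available as a service core.

    #include <stdint.h>
    #include <rte_service.h>

    /* Map a registered service onto a service lcore and start it running. */
    static int start_service_on_core(const char *service_name, uint32_t lcore)
    {
        uint32_t sid;

        if (rte_service_get_by_name(service_name, &sid) != 0)
            return -1;                          /* no such service registered */

        rte_service_lcore_add(lcore);           /* make the lcore a service core
                                                 * (-EALREADY if it already is) */
        rte_service_map_lcore_set(sid, lcore, 1);  /* map service -> lcore */
        rte_service_runstate_set(sid, 1);          /* application side: "run it" */
        return rte_service_lcore_start(lcore);     /* launches service_runner_func() */
    }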
    
    

    The RTC- and algorithm-related code can be looked up in the source tree; it is not reproduced here.

    6. Summary

    When studying this kind of material, the most important thing is to grasp the overall logic and the processing flow. The algorithms and frameworks can be set aside at first; once the overall flow is clear, diving back into them gives a much better grasp of the whole body of knowledge. Learning needs a method and a clear line of thought; never plunge straight into the details, or you will spend a great deal of effort for very little gain.

  • Original article: https://blog.csdn.net/fpcc/article/details/134481648