• DPDK: handing ARP and ICMP packets to KNI to implement ping


    KNI

            After DPDK receives a packet, any packet we do not want to handle ourselves can be handed to KNI. KNI passes it to the kernel protocol stack; once the kernel has processed it, the response comes back through KNI, and we read it out of the KNI device.
            In this example, the ARP and ICMP packets needed for ping are delegated to KNI, which is enough to make ping work (ARP packets often go unhandled in application code anyway).

        Initializing the environment

            Much the same as before: initialize the EAL, register termination signals, create the memory pool, and initialize each device.

            Since KNI is used, it also needs to be set up.

                   1. Initialize KNI

    rte_kni_init(0);
    // the instructor said this parameter is unused

            DPDK provides an interface for operating KNI, which I like to call a handle. The next step is to obtain one.

    struct rte_kni *rte_kni_alloc(struct rte_mempool *pktmbuf_pool, const struct rte_kni_conf *conf, struct rte_kni_ops *ops);
    // returns a handle for the KNI device, or NULL on failure

            The first parameter is the memory pool, the second is the KNI configuration, and the third holds the operation callbacks. Here it is wrapped in a helper function.

    static struct rte_kni *ng_alloc_kni(struct rte_mempool *mbuf_p, uint16_t port) {
        struct rte_kni_conf kconf;
        struct rte_kni_ops kops;
    
        memset(&kconf, 0, sizeof(kconf));
        memset(&kops, 0, sizeof(kops));
    
        snprintf(kconf.name, RTE_KNI_NAMESIZE, "vEth%u", port);
        kconf.group_id = port;
        kconf.mbuf_size = 1024;
        rte_eth_macaddr_get(port, (struct rte_ether_addr *)&kconf.mac_addr);
        rte_eth_dev_get_mtu(port, &kconf.mtu);
    
        kops.config_network_if = ng_config_net_ifup;
        // config_network_if is a function pointer (called when the interface is brought up or down)
    
        return rte_kni_alloc(mbuf_p, &kconf, &kops);
    }
    

            Calling this function gives the caller a handle for operating the KNI device.
                   2. Obtain the KNI handle

    handlers[port_id] = ng_alloc_kni(mem_pool, port_id);
    

                   3. Main loop

    while (!quit) {
        struct rte_mbuf *mbuf[MAX_PKT_BURST];
        struct rte_ether_hdr *hdr;
        uint j;
        uint16_t port_id;
    
        RTE_ETH_FOREACH_DEV(port_id) {
            struct rte_ether_hdr *ether_hdr;
            struct rte_ipv4_hdr *ip_hdr;
            struct rte_icmp_hdr *icmp_hdr;
            struct rte_arp_hdr *arp_hdr;
    
            rte_kni_handle_request(handlers[port_id]);
            // read packets from the NIC
            sz = rte_eth_rx_burst(port_id, 0, mbuf, MAX_PKT_BURST);
            FOR (k, 0, sz) {
                ether_hdr = rte_pktmbuf_mtod(mbuf[k], struct rte_ether_hdr *);
                if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
                    // forward to KNI; free the mbuf if the write fails
                    if (rte_kni_tx_burst(handlers[port_id], &mbuf[k], 1) <= 0) {
                        rte_pktmbuf_free(mbuf[k]);
                    }
                }
                else if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
                    ip_hdr = rte_pktmbuf_mtod_offset(mbuf[k], struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr));
                    if (ip_hdr->dst_addr == LOCAL_IP && ip_hdr->next_proto_id == IPPROTO_ICMP) {
                        rte_kni_tx_burst(handlers[port_id], &mbuf[k], 1);
                    }
                    else rte_pktmbuf_free(mbuf[k]);
                }
                else rte_pktmbuf_free(mbuf[k]);
            }
    
            sz = rte_kni_rx_burst(handlers[port_id], mbuf, MAX_PKT_BURST);
            // read packets back from KNI
            FOR (k, 0, sz) {
                ether_hdr = rte_pktmbuf_mtod(mbuf[k], struct rte_ether_hdr *);
                if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
                    // send back out the NIC; free the mbuf if the write fails
                    if (rte_eth_tx_burst(port_id, 0, &mbuf[k], 1) <= 0) {
                        rte_pktmbuf_free(mbuf[k]);
                    }
                }
                else if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
                    ip_hdr = rte_pktmbuf_mtod_offset(mbuf[k], struct rte_ipv4_hdr *,
                            sizeof(struct rte_ether_hdr));
                    if (ip_hdr->src_addr == LOCAL_IP && ip_hdr->next_proto_id == IPPROTO_ICMP) {
                        rte_eth_tx_burst(port_id, 0, &mbuf[k], 1);
                    }
                    else rte_pktmbuf_free(mbuf[k]);
                }
                else rte_pktmbuf_free(mbuf[k]);
            }
        }
    }
    

                   4. After the program starts, KNI creates a virtual interface named vEth{port id}, which needs an IP address

    ifconfig vEth0 192.168.13.144 netmask 255.255.255.0 broadcast 192.168.13.255
    # mine is vEth0


            The code above only handles ARP and ICMP.

    Problems

        While debugging, ping still succeeded even though the main loop was empty

         1. When pinging from the same machine, the traffic may not go through this NIC at all
         2. Pinging from another machine still succeeded; after disabling another NAT adapter on the same subnet, things went back to normal (ping finally failed)

        Ping worked at first, then stopped after a short while

              A packet capture showed the ICMP requests were getting no replies. Checking the IP configuration again revealed the IP address was gone.
              A senior classmate explained it: DHCP was enabled, but the DHCP traffic itself was being intercepted, so once the lease expired the address could not be renewed and ping failed. Switching from DHCP to a static (manual) address, with the IP hard-coded, fixed it.

    Complete code

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <arpa/inet.h>   // inet_addr()
    #include <pthread.h>     // pthread_t for rte_ctrl_thread_create()
    
    #include <rte_common.h>
    #include <rte_memcpy.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_interrupts.h>
    #include <rte_ether.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>
    #include <rte_mbuf.h>
    #include <rte_kni.h>
    
    #define RTE_EXIT(x) rte_exit(EXIT_FAILURE, "error at %s\n", x)
    #define MEM_CACHE_SIZE 256
    #define MAX_PKT_BURST 32
    #define MAX_PORTS 32
    #define MAX_CORES 8
    
    uint16_t IP_TYPE;
    uint16_t ARP_TYPE;
    int enable_cores = 1;
    const int nb_rxq = 1;
    const int nb_txq = 1;
    
    #define ERROR 1
    
    #define EN_CS enable_cores
    
    static struct rte_kni *handlers[MAX_PORTS];
    
    
    const char *local_ip_str = "192.168.13.144";
    static struct in_addr local;
    // local IP in network byte order
    long lip;
    //#define LOCAL_IP inet_addr(local_ip_str)
    #define LOCAL_IP lip
    
    // enable promiscuous mode
    #define promiscuous_on
    
    /**
     * @brief for loop range of [begin, end - 1]
     */
    #define FOR(idx, begin, end) for (int idx = begin; idx < end; idx++)
    
    /**
     * @brief reverse for range of [begin + 1, end]
     */
    #define ROF(i, end, begin) for (int i = end; i > begin; i--)
    
    
    #define enable_kni 1
    
    static bool quit = false;
    
    
    static uint8_t default_rss_key_40bytes[] = {
            0xd1, 0x81, 0xc6, 0x2c, 0xf7, 0xf4, 0xdb, 0x5b,
            0x19, 0x83, 0xa2, 0xfc, 0x94, 0x3e, 0x1a, 0xdb,
            0xd9, 0x38, 0x9e, 0x6b, 0xd1, 0x03, 0x9c, 0x2c,
            0xa7, 0x44, 0x99, 0xad, 0x59, 0x3d, 0x56, 0xd9,
            0xf3, 0x25, 0x3c, 0x06, 0x2a, 0xdc, 0x1f, 0xfc
    };
    static uint8_t rss_intel_key[] = {
            0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
            0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
            0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
            0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
            0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A
    };
    static uint8_t default_rss_key_52bytes[] = {
            0x44, 0x39, 0x79, 0x6b, 0xb5, 0x4c, 0x50, 0x23,
            0xb6, 0x75, 0xea, 0x5b, 0x12, 0x4f, 0x9f, 0x30,
            0xb8, 0xa2, 0xc0, 0x3d, 0xdf, 0xdc, 0x4d, 0x02,
            0xa0, 0x8c, 0x9b, 0x33, 0x4a, 0xf6, 0x4a, 0x4c,
            0x05, 0xc6, 0xfa, 0x34, 0x39, 0x58, 0xd8, 0x55,
            0x7d, 0x99, 0x58, 0x3a, 0xe1, 0x38, 0xc9, 0x2e,
            0x81, 0x15, 0x03, 0x66
    };
    
    static uint16_t nb_rxd = 1024;
    static uint16_t nb_txd = 1024;
    
    int core_to_rx_queue[MAX_CORES];
    int core_to_tx_queue[MAX_CORES];
    static struct rte_eth_conf port_conf = {
            .rxmode = {
                    .mq_mode = RTE_ETH_MQ_RX_RSS,
                    .split_hdr_size = 0,
            },
            .txmode = {
                    .mq_mode = RTE_ETH_MQ_TX_NONE,
            },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_key = default_rss_key_40bytes,
                            .rss_key_len = 40,
                            .rss_hf = RTE_ETH_RSS_PROTO_MASK,
                    },
            },
    };
    
    // MAC address of each port
    static struct rte_ether_addr ether_address[MAX_PORTS];
    
    // handler for the registered termination signals
    static void signal_handler(int num) {
        if (num == SIGINT || num == SIGTERM) {
            printf("\n\nSignal %d received, preparing to exit...\n", num);
            quit = true;
        }
    }
    
    static uint16_t checksum(uint16_t *addr, int count) {
        long sum = 0;
        while (count > 1) {
            sum += *(ushort *)addr++;
            count -= 2;
        }
        if (count > 0) sum += *(u_char *)addr;
        while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
        return ~sum;
    }
    
    #if enable_kni
    static int ng_config_net_ifup(uint16_t port_id, uint8_t if_up) {
        if (!rte_eth_dev_is_valid_port(port_id)) return -EINVAL;
        int ret;
        if (if_up) {
            rte_eth_dev_stop(port_id);
            ret = rte_eth_dev_start(port_id);
        }
        else {
            ret = rte_eth_dev_stop(port_id);
        }
        return ret;
    }
    
    static struct rte_kni *ng_alloc_kni(struct rte_mempool *mbuf_p, uint16_t port) {
        struct rte_kni_conf kconf;
        struct rte_kni_ops kops;
    
        memset(&kconf, 0, sizeof(kconf));
        memset(&kops, 0, sizeof(kops));
    
        snprintf(kconf.name, RTE_KNI_NAMESIZE, "vEth%u", port);
        kconf.group_id = port;
        kconf.mbuf_size = 1024;
        rte_eth_macaddr_get(port, (struct rte_ether_addr *)&kconf.mac_addr);
        rte_eth_dev_get_mtu(port, &kconf.mtu);
    
        kops.config_network_if = ng_config_net_ifup;
    
        return rte_kni_alloc(mbuf_p, &kconf, &kops);
    }
    #endif
    
    static void *keep_watch_on_mem_pool_status(void *arg) {
        uint avail = 0;
        uint in_use = 0;
        struct rte_mempool *mp = (struct rte_mempool *)arg;
        while (!quit) {
            avail = rte_mempool_avail_count(mp);
            in_use = rte_mempool_in_use_count(mp);
            printf("\n\n***********************************\n");
            printf("mempool avail count is %d\n", avail);
            printf("mempool in_use is %d\n", in_use);
            printf("***********************************\n\n");
            sleep(1);
        }
        return NULL;
    }
    
    static int create_a_new_thread(struct rte_mempool *mp) {
        pthread_t pid;
        return rte_ctrl_thread_create(&pid, "kp_wch_on_s", NULL, keep_watch_on_mem_pool_status, mp);
    }
    
    
    static int task_per_logical_core(void *args) {
        uint lcore_id = rte_lcore_id();
        struct rte_mempool *mem_pool = (struct rte_mempool *)args;
        int ret;
        uint16_t sz;
        while (!quit) {
            struct rte_mbuf *mbuf[MAX_PKT_BURST];
            struct rte_ether_hdr *hdr;
            uint j;
            uint16_t port_id;
    
            RTE_ETH_FOREACH_DEV(port_id) {
                struct rte_ether_hdr *ether_hdr;
                struct rte_ipv4_hdr *ip_hdr;
                struct rte_icmp_hdr *icmp_hdr;
                struct rte_arp_hdr *arp_hdr;
    
                rte_kni_handle_request(handlers[port_id]);
    
                sz = rte_eth_rx_burst(port_id, 0, mbuf, MAX_PKT_BURST);
                FOR (k, 0, sz) {
                    ether_hdr = rte_pktmbuf_mtod(mbuf[k], struct rte_ether_hdr *);
                    if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
                        if (rte_kni_tx_burst(handlers[port_id], &mbuf[k], 1) <= 0) {
                            rte_pktmbuf_free(mbuf[k]);
                        }
                    }
                    else if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
                        ip_hdr = rte_pktmbuf_mtod_offset(mbuf[k], struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr));
                        if (ip_hdr->dst_addr == LOCAL_IP && ip_hdr->next_proto_id == IPPROTO_ICMP) {
                            rte_kni_tx_burst(handlers[port_id], &mbuf[k], 1);
                        }
                        else rte_pktmbuf_free(mbuf[k]);
                    }
                    else rte_pktmbuf_free(mbuf[k]);
                }
    
                sz = rte_kni_rx_burst(handlers[port_id], mbuf, MAX_PKT_BURST);
                FOR (k, 0, sz) {
                    ether_hdr = rte_pktmbuf_mtod(mbuf[k], struct rte_ether_hdr *);
                    if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
                        if (rte_eth_tx_burst(port_id, 0, &mbuf[k], 1) <= 0) {
                            rte_pktmbuf_free(mbuf[k]);
                        }
                    }
                    else if (ether_hdr->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
                        ip_hdr = rte_pktmbuf_mtod_offset(mbuf[k], struct rte_ipv4_hdr *,
                                sizeof(struct rte_ether_hdr));
                        if (ip_hdr->src_addr == LOCAL_IP && ip_hdr->next_proto_id == IPPROTO_ICMP) {
                            rte_eth_tx_burst(port_id, 0, &mbuf[k], 1);
                        }
                        else rte_pktmbuf_free(mbuf[k]);
                    }
                    else rte_pktmbuf_free(mbuf[k]);
                }
            }
        }
        return 0;
    }
    
    int main(int argc, char **argv) {
        setbuf(stdout, NULL);
    
        IP_TYPE = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
        ARP_TYPE = rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP);
        lip = inet_addr(local_ip_str);
    
    
        int ret;
        local.s_addr = LOCAL_IP;
        uint16_t nb_ports_avail, port_id;
    
        // TODO
        static struct rte_mempool *mem_pool;
    
        ret = rte_eal_init(argc, argv);
        if (ret < 0) RTE_EXIT("rte_eal_init()");
    
        signal(SIGINT, signal_handler);
        signal(SIGTERM, signal_handler);
    
        nb_ports_avail = rte_eth_dev_count_avail();
    
        if (nb_ports_avail <= 0) RTE_EXIT("rte_eth_dev_count_avail");
    
        uint nb_mbuf = RTE_MAX(EN_CS * (nb_rxd + nb_txd + MAX_PKT_BURST + 1 * MEM_CACHE_SIZE),
                               8192U);
    
    
        mem_pool = NULL;
        mem_pool = rte_pktmbuf_pool_create("mbuf", nb_mbuf, MEM_CACHE_SIZE, 0,
                                           RTE_MBUF_DEFAULT_BUF_SIZE, (int)rte_socket_id());
        if (mem_pool == NULL) {
            RTE_EXIT("rte_pktmbuf_pool_create");
        }
    
        rte_kni_init(0);
    
    
        RTE_ETH_FOREACH_DEV(port_id) {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf local_conf = port_conf;
            struct rte_eth_rxconf rxconf;
            struct rte_eth_txconf txconf;
            struct rte_ether_addr *addr;
    
            ret = rte_eth_macaddr_get(port_id, &ether_address[port_id]);
            if (ret) RTE_EXIT("rte_eth_macaddr_get");
    
            addr = (struct rte_ether_addr *)malloc(sizeof(struct rte_ether_addr));
    
            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret < 0) RTE_EXIT("rte_eth_dev_info_get");
    
            // TODO
    
            if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
                local_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
    
    
            local_conf.rx_adv_conf.rss_conf.rss_hf &=
                    dev_info.flow_type_rss_offloads;
    
    
            ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &local_conf);
            if (ret) RTE_EXIT("rte_eth_dev_configure");
    
            ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, &nb_txd);
            if (ret) RTE_EXIT("rte_eth_dev_adjust_nb_rx_tx_desc");
    
            ret = rte_eth_macaddr_get(port_id, addr);
            if (ret) RTE_EXIT("rte_eth_macaddr_get");
    
            fflush(stdout);
            rxconf = dev_info.default_rxconf;
            rxconf.offloads = local_conf.rxmode.offloads;
    
            for (int i = 0; i < nb_rxq; i++) {
                fflush(stdout);
                rxconf = dev_info.default_rxconf;
                rxconf.offloads = local_conf.rxmode.offloads;
                ret = rte_eth_rx_queue_setup(port_id, i, nb_rxd, rte_eth_dev_socket_id(port_id), &rxconf, mem_pool);
            if (ret) rte_exit(EXIT_FAILURE, "rx queue %d setup failed\n", i);
            }
    
            fflush(stdout);
            txconf = dev_info.default_txconf;
            txconf.offloads = local_conf.txmode.offloads;
    
            for (int i = 0; i < nb_txq; i++) {
                fflush(stdout);
                txconf = dev_info.default_txconf;
                txconf.offloads = local_conf.txmode.offloads;
                ret = rte_eth_tx_queue_setup(port_id, i, nb_txd, rte_eth_dev_socket_id(port_id), &txconf);
            if (ret) rte_exit(EXIT_FAILURE, "tx queue %d setup failed\n", i);
            }
    
    
    #ifdef promiscuous_on
            rte_eth_promiscuous_enable(port_id);
    #endif
    
            ret = rte_eth_dev_start(port_id);
            if (ret) RTE_EXIT("rte_eth_dev_start");
    
            handlers[port_id] = ng_alloc_kni(mem_pool, port_id);
            if (handlers[port_id] == NULL) rte_exit(EXIT_FAILURE, "error on ng_alloc_kni\n\n");
    
        }
    
    //    system("clear");
    
    #ifdef enable_guard
        ret = create_a_new_thread(mem_pool);
        if (ret) RTE_EXIT("create a new thread");
    #endif
    
        uint lcore_id = 0;
        memset(core_to_rx_queue, -1, sizeof(core_to_rx_queue));
        memset(core_to_tx_queue, -1, sizeof(core_to_tx_queue));
        uint nb_rx_queues = 0;
        uint nb_tx_queues = 0;
        while (lcore_id < EN_CS) {
            if (rte_lcore_is_enabled(lcore_id)) {
                if ((nb_rx_queues < nb_rxq) && (!(~core_to_rx_queue[lcore_id])))core_to_rx_queue[lcore_id] = nb_rx_queues++;
                if ((nb_tx_queues < nb_txq) && (!(~core_to_tx_queue[lcore_id])))core_to_tx_queue[lcore_id] = nb_tx_queues++;
            }
            lcore_id++;
        }
    
        // TODO
        void *args;
        args = (void *)mem_pool;
    //    printf("main core is %u\n\n", rte_get_main_lcore());
        //rte_eal_mp_remote_launch(task_per_logical_core, args, CALL_MAIN);
    //	for (int k = 0; k < 4; k++) {
    //		if (k == rte_get_main_lcore()) continue;
    //		rte_eal_remote_launch(task_per_logical_core, args, k);
    //	}
    
        create_a_new_thread(mem_pool);
        task_per_logical_core(args);
    
    //    RTE_LCORE_FOREACH_WORKER(lcore_id) {
    //        if (lcore_id >= 4) continue;
    //        if (rte_eal_wait_lcore(lcore_id)) {
    //            ret = -1;
    //            break;
    //        }
    //    }
    
        RTE_ETH_FOREACH_DEV(port_id) {
            ret = rte_eth_dev_stop(port_id);
            if (ret) RTE_EXIT("rte_eth_dev_stop");
    
            ret = rte_eth_dev_close(port_id);
            if (ret) RTE_EXIT("rte_eth_dev_close");
        }
    
        ret = rte_eal_cleanup();
        if (ret) RTE_EXIT("rte_eal_cleanup");
    
        return ret;
    }
    
    
  • Original article: https://blog.csdn.net/weixin_43701790/article/details/125506042