Kafka ACL-Based Access Control under KRaft

    This article is based on Kafka 3.2.0; earlier versions cannot use the method described here.

    The approach requires a small modification to the Kafka source code.
    The modified part (metadata/src/main/java/org/apache/kafka/metadata/authorizer/StandardAuthorizerData.java) is shown below:

        void addAcl(Uuid id, StandardAcl acl) {
            try {
                StandardAcl prevAcl = aclsById.putIfAbsent(id, acl);
                if (prevAcl != null) {
                    // was: throw new RuntimeException("An ACL with ID " + id + " already exists.");
                    log.warn("An ACL with ID " + id + " already exists.");
                } else if (!aclsByResource.add(acl)) {
                    aclsById.remove(id);
                    // was: throw new RuntimeException(...)
                    log.warn("Unable to add the ACL with ID " + id + " to aclsByResource");
                } else if (log.isTraceEnabled()) {
                    log.trace("Added ACL " + id + ": " + acl);
                }
            } catch (Throwable e) {
                log.error("addAcl error", e);
                // was: throw e;
            }
        }

        void removeAcl(Uuid id) {
            try {
                StandardAcl acl = aclsById.remove(id);
                if (acl == null) {
                    // was: throw new RuntimeException("ID " + id + " not found in aclsById.");
                    log.warn("ID " + id + " not found in aclsById.");
                } else if (!aclsByResource.remove(acl)) {
                    // was: throw new RuntimeException(...)
                    log.warn("Unable to remove the ACL with ID " + id + " from aclsByResource");
                } else if (log.isTraceEnabled()) {
                    log.trace("Removed ACL " + id + ": " + acl);
                }
            } catch (Throwable e) {
                log.error("removeAcl error", e);
                // was: throw e;
            }
        }
    
    

    The effect of this change is to replace throwing an exception with logging a warning: with the original exception-throwing behavior, Kafka could not start at all. Why Kafka executes add/remove ACL operations during startup is not entirely clear to me; presumably the broker replays the ACL records stored in the __cluster_metadata log (see KIP-801 below) while catching up on metadata, and a duplicate record then triggers the RuntimeException. The exception seen when startup fails looks like this:

    Jul 28 15:29:06 kafka-server-start.sh[123334]: [2022-07-28 15:29:06,133] ERROR [StandardAuthorizer 1] addAcl error (org.apache.kafka.metadata.authorizer.Stand
    Jul 28 15:29:06 kafka-server-start.sh[123334]: java.lang.RuntimeException: An ACL with ID eK5n22NLQOeOHTT3gcnf7w already exists.
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at org.apache.kafka.metadata.authorizer.StandardAuthorizerData.addAcl(StandardAuthorizerData.java:169)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at org.apache.kafka.metadata.authorizer.StandardAuthorizer.addAcl(StandardAuthorizer.java:83)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$19(BrokerMetadataPublisher.scala:234)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18(BrokerMetadataPublisher.scala:232)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataPublisher.$anonfun$publish$18$adapted(BrokerMetadataPublisher.scala:221)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at scala.Option.foreach(Option.scala:437)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataPublisher.publish(BrokerMetadataPublisher.scala:221)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataListener.kafka$server$metadata$BrokerMetadataListener$$publish(BrokerMet
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2(BrokerMetadataListener.scala:
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.$anonfun$run$2$adapted(BrokerMetadataListene
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at scala.Option.foreach(Option.scala:437)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at kafka.server.metadata.BrokerMetadataListener$HandleCommitsEvent.run(BrokerMetadataListener.scala:119)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:121)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:200)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:173)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: at java.lang.Thread.run(Thread.java:748)
    Jul 28 15:29:06 kafka-server-start.sh[123334]: [2022-07-28 15:29:06,139] ERROR [BrokerMetadataPublisher id=1] Error publishing broker metadata at OffsetAndEpo
    Jul 28 15:29:06 kafka-server-start.sh[123334]: [2022-07-28 15:29:06,143] ERROR [BrokerMetadataListener id=1] Unexpected error handling HandleCommitsEvent (kaf
    (both followed by the same "An ACL with ID eK5n22NLQOeOHTT3gcnf7w already exists." RuntimeException stack trace as above)
    
    

    Installation

    1. Download the 3.2.0 release from the official site and unpack it
      Download link: https://www.apache.org/dyn/closer.cgi?path=/kafka/3.2.0/kafka_2.13-3.2.0.tgz
    tar -xzf kafka_2.13-3.2.0.tgz
    cd kafka_2.13-3.2.0
    
    2. Replace kafka-metadata-3.2.0.jar

    Rebuild the metadata module from source with the modification above (e.g. with ./gradlew :metadata:jar from the Kafka source root) to produce a new kafka-metadata-3.2.0.jar, and use it to replace libs/kafka-metadata-3.2.0.jar:

    # Back up the official kafka-metadata-3.2.0.jar
    # It must be moved out of libs/, not merely copied
    mv libs/kafka-metadata-3.2.0.jar ./
    # Then put your own build of the jar in its place
    mv /root/kafka-3.2.0-src/metadata/build/libs/kafka-metadata-3.2.0.jar libs/kafka-metadata-3.2.0.jar
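    If the replacement worked, a broker that previously died at startup with the RuntimeException shown above should now only log the corresponding warning and continue starting.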
    
    3. Edit the configuration files

    config/kraft/server.properties:

    process.roles=broker,controller
    node.id=1
    # Change this: replace the placeholders with your nodes' actual IPs
    controller.quorum.voters=1@<ip1>:9093,2@<ip2>:9093,3@<ip3>:9093
    # In listeners, change PLAINTEXT to SASL_PLAINTEXT
    listeners=SASL_PLAINTEXT://<ip1>:9092,CONTROLLER://<ip1>:9093
    # Here too, PLAINTEXT becomes SASL_PLAINTEXT
    inter.broker.listener.name=SASL_PLAINTEXT
    # And here as well
    advertised.listeners=SASL_PLAINTEXT://<ip1>:9092
    controller.listener.names=CONTROLLER
    # Change CONTROLLER:PLAINTEXT to CONTROLLER:SASL_PLAINTEXT
    listener.security.protocol.map=CONTROLLER:SASL_PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    # Where the data goes (despite the name this holds Kafka's data; the actual log files live under logs/ in the install directory)
    log.dirs=/data/kafka_3.2.0/log
    num.partitions=1
    num.recovery.threads.per.data.dir=2
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    # SASL mechanism: PLAIN is the simplest, at the cost of not being able to add users dynamically
    sasl.mechanism.inter.broker.protocol=PLAIN
    sasl.enabled.mechanisms=PLAIN
    sasl.mechanism=PLAIN
    # Disable automatic topic creation
    auto.create.topics.enable=false
    # Deny access unless an ACL explicitly grants it
    allow.everyone.if.no.acl.found=false
    # Super user(s)
    super.users=User:admin
    # The authorizer newly introduced in 3.2.0; see https://cwiki.apache.org/confluence/display/KAFKA/KIP-801%3A+Implement+an+Authorizer+that+stores+metadata+in+__cluster_metadata
    authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
    # SASL mechanism used for intra-cluster (controller) authentication
    sasl.mechanism.controller.protocol=PLAIN
    
    

    config/kraft/jaas.conf

    KafkaServer {
       org.apache.kafka.common.security.plain.PlainLoginModule required
       username="admin"
       password="password"
       user_admin="password"
       user_test="test";
    };
    
    • username/password is the identity this broker itself uses when authenticating.
    • user_admin="password" defines a user named admin with password "password". At least one such entry is required, and it must match the username/password pair above.
    • user_test="test" defines a second user, named test, with password "test".
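    Because PLAIN credentials are statically defined in this file, adding or changing a user requires editing jaas.conf on every broker and restarting it; this is the "cannot add users dynamically" limitation noted in server.properties above.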

    4. Create a systemd service (/usr/lib/systemd/system/kafka.service)

    By default Kafka is managed from the command line; this unit file lets systemd start and stop Kafka and act as a supervising daemon.

    [Unit]
    Description=kafka server daemon

    [Service]
    Type=simple
    # Point java.security.auth.login.config at jaas.conf to enable user authentication
    Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/data/kafka_3.2.0/package/kafka_2.13-3.2.0/config/kraft/jaas.conf"
    # Start command
    ExecStart=/data/kafka_3.2.0/package/kafka_2.13-3.2.0/bin/kafka-server-start.sh /data/kafka_3.2.0/package/kafka_2.13-3.2.0/config/kraft/server.properties
    ExecReload=/bin/kill -HUP $MAINPID
    # Stop command
    ExecStop=/data/kafka_3.2.0/package/kafka_2.13-3.2.0/bin/kafka-server-stop.sh
    KillMode=process
    Restart=on-failure
    RestartSec=42s

    [Install]
    WantedBy=multi-user.target
    
    
    5. Generate a cluster ID and format the storage directories
    # Prints a random uuid; pass it as <uuid> to the format command, using the same value on every node
    ./bin/kafka-storage.sh random-uuid
    ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
    
    6. Start Kafka
    systemctl daemon-reload
    systemctl start kafka
    

    Using it from the command line

    1. First create a client authentication file

    vim sasl.properties

    # Configure one of the users defined in jaas.conf
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="password";
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    

    When running commands, always append --command-config ./sasl.properties so the tool authenticates as this user.
    2. Create two topics (an AdminClient equivalent is sketched after the commands)

    # Create topic create-for-test
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic create-for-test --partitions 1 --replication-factor 1 --command-config ./sasl.properties
    # Create topic admin-create-test
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic admin-create-test --partitions 1 --replication-factor 1 --command-config ./sasl.properties
    # List topics
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --list --command-config ./sasl.properties
    
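    The same topics can also be created from Java with the AdminClient API. A minimal sketch, assuming the broker is reachable at localhost:9092 and reusing the admin credentials from sasl.properties (the class name is my own):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import java.util.Arrays;
    import java.util.Properties;

    public class CreateTopics {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Same three settings as in sasl.properties
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"password\";");

            try (Admin admin = Admin.create(props)) {
                // 1 partition, replication factor 1, matching the CLI flags above
                admin.createTopics(Arrays.asList(
                    new NewTopic("create-for-test", 1, (short) 1),
                    new NewTopic("admin-create-test", 1, (short) 1))).all().get();
                // Print the topic names this principal can see
                System.out.println(admin.listTopics().names().get());
            }
        }
    }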
    3. Grant the test user Read permission on topic create-for-test (an AdminClient equivalent follows the command)
    bin/kafka-acls.sh  --bootstrap-server localhost:9092 --add --allow-principal User:test --operation Read --topic create-for-test --command-config ./sasl.properties
    
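    The same grant from Java, again a sketch under the same assumptions as the CreateTopics example (the describeAcls call at the end simply prints every ACL the authorizer currently stores):

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclBindingFilter;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;
    import java.util.Collections;
    import java.util.Properties;

    public class GrantReadAcl {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"password\";");

            try (Admin admin = Admin.create(props)) {
                // ALLOW User:test to READ the literal topic name create-for-test from any host
                AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "create-for-test", PatternType.LITERAL),
                    new AccessControlEntry("User:test", "*", AclOperation.READ, AclPermissionType.ALLOW));
                admin.createAcls(Collections.singletonList(binding)).all().get();

                // Print every ACL currently stored
                admin.describeAcls(AclBindingFilter.ANY).values().get().forEach(System.out::println);
            }
        }
    }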
    4. Switch to the test user and list topics
    # Edit the credentials: change admin/password to test/test
    vim sasl.properties
    # List all topics; only create-for-test should be visible
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --list --command-config ./sasl.properties
    

    Using it from Java

    package org.example;

    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.util.Properties;
    import java.util.UUID;

    /**
     * Lists the topics visible to the authenticated user.
     */
    public class App
    {
        public static void main( String[] args )
        {
            String username = "test";
            String password = "test";
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // replace with your broker address
            props.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            // Security protocol
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            // SASL mechanism
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // The user to authenticate as
            String saslJaasConfig = String.format(
                "org.apache.kafka.common.security.plain.PlainLoginModule required \nusername=\"%s\" \npassword=\"%s\";",
                username, password);
            props.put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfig);

            Consumer<String, String> consumer = new KafkaConsumer<>(props);
            System.out.println(consumer.listTopics());
            consumer.close();
        }
    }
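    Run with the test credentials, this should print only create-for-test; switch username/password to admin/password and it prints all topics, since admin is configured as a super user.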
    
    
    Original article: https://blog.csdn.net/s7799653/article/details/126095392