Oracle 19c RAC installation on CentOS 7.5


     

    1. Node disk group planning

    Plan three ASM disk groups, each containing one or more disks. /dev/sda is the system disk; the ASM disks are planned as follows:

    Disk group    Device      Size    ASM disk name    Purpose               Redundancy
    OCR           /dev/sdb    10G     asm-ocr1         OCR/Voting File       EXTERNAL
    DATA          /dev/sdc    40G     asm-data1        Data Files            EXTERNAL
    FRA           /dev/sdd    20G     asm-fra1         Fast Recovery Area    EXTERNAL
     

    Disk group notes:

    • OCR: OCR and voting files (OCR/Voting File)
    • DATA: database data files (Data Files)
    • FRA: archived logs and the fast recovery area (Fast Recovery Area)

    On the VMware Workstation host, create the three shared virtual disks with vmware-vdiskmanager:
    1. D:\software\centos\sharedisks>
    2. D:\software\centos\sharedisks>"E:\Soft\BIGDATA\Centos\VM\VmvareWorkstation\vmware-vdiskmanager.exe" -c -s 10GB -t 4 sharedisk01.vmdk
    3. Creating disk 'sharedisk01.vmdk'
    4. Create: 100% done.
    5. Virtual disk creation successful.
    6. D:\software\centos\sharedisks>"E:\Soft\BIGDATA\Centos\VM\VmvareWorkstation\vmware-vdiskmanager.exe" -c -s 40GB -t 4 sharedisk02.vmdk
    7. Creating disk 'sharedisk02.vmdk'
    8. Create: 100% done.
    9. Virtual disk creation successful.
    10. D:\software\centos\sharedisks>"E:\Soft\BIGDATA\Centos\VM\VmvareWorkstation\vmware-vdiskmanager.exe" -c -s 20GB -t 4 sharedisk03.vmdk
    11. Creating disk 'sharedisk03.vmdk'
    12. Create: 100% done.
    13. Virtual disk creation successful.
    14. D:\software\centos\sharedisks>

    Append the following lines to the end of each virtual machine's .vmx file; only then will the VMs power on correctly with the shared disks attached:

    1. disk.locking = "false"
    2. scsi1.sharedBus = "VIRTUAL"
    3. disk.EnableUUID = "TRUE"
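
    For completeness, here is a sketch of the additional .vmx entries that attach the three shared VMDKs to a second SCSI controller (scsi1) on each node. The controller type, slot numbers, and Windows paths below are assumptions and must be adapted to your own environment:

    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "D:\software\centos\sharedisks\sharedisk01.vmdk"
    scsi1:0.mode = "independent-persistent"
    scsi1:1.present = "TRUE"
    scsi1:1.fileName = "D:\software\centos\sharedisks\sharedisk02.vmdk"
    scsi1:1.mode = "independent-persistent"
    scsi1:2.present = "TRUE"
    scsi1:2.fileName = "D:\software\centos\sharedisks\sharedisk03.vmdk"
    scsi1:2.mode = "independent-persistent"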

    Check the disk information on each node:

    Node1:

    1. [root@node1 ~]#
    2. [root@node1 ~]# lsscsi
    3. [0:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
    4. [0:0:1:0] disk VMware, VMware Virtual S 1.0 /dev/sdb
    5. [0:0:2:0] disk VMware, VMware Virtual S 1.0 /dev/sdc
    6. [0:0:3:0] disk VMware, VMware Virtual S 1.0 /dev/sdd
    7. [2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0
    8. [root@node1 ~]#

    Node2:

    1. [root@node2 ~]#
    2. [root@node2 ~]# lsscsi
    3. [0:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
    4. [0:0:1:0] disk VMware, VMware Virtual S 1.0 /dev/sdb
    5. [0:0:2:0] disk VMware, VMware Virtual S 1.0 /dev/sdc
    6. [0:0:3:0] disk VMware, VMware Virtual S 1.0 /dev/sdd
    7. [2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0
    8. [root@node2 ~]#

    IP network configuration

    Node1 IP network configuration:

    1. [root@node1 network-scripts]# cat ifcfg-ens33
    2. TYPE="Ethernet"
    3. PROXY_METHOD="none"
    4. BROWSER_ONLY="no"
    5. BOOTPROTO="static"
    6. DEFROUTE="yes"
    7. IPV4_FAILURE_FATAL="no"
    8. IPV6INIT="yes"
    9. IPV6_AUTOCONF="yes"
    10. IPV6_DEFROUTE="yes"
    11. IPV6_FAILURE_FATAL="no"
    12. IPV6_ADDR_GEN_MODE="stable-privacy"
    13. NAME="ens33"
    14. UUID="160dfb25-535e-49b1-ac14-791c0817279a"
    15. DEVICE="ens33"
    16. ONBOOT="yes"
    17. IPADDR=192.168.8.111
    18. GATEWAY=192.168.8.2
    19. DNS1=192.168.8.2
    20. DNS2=8.8.8.8
    21. [root@node1 network-scripts]# cat ifcfg-ens34
    22. TYPE=Ethernet
    23. PROXY_METHOD=none
    24. BROWSER_ONLY=no
    25. BOOTPROTO=static
    26. DEFROUTE=yes
    27. IPV4_FAILURE_FATAL=no
    28. IPV6INIT=yes
    29. IPV6_AUTOCONF=yes
    30. IPV6_DEFROUTE=yes
    31. IPV6_FAILURE_FATAL=no
    32. IPV6_ADDR_GEN_MODE=stable-privacy
    33. NAME=ens34
    34. UUID=7bc7fda1-d92c-42f8-b276-28d69df67b5c
    35. DEVICE=ens34
    36. ONBOOT=yes
    37. IPADDR=192.168.100.111
    38. GATEWAY=192.168.100.2
    39. DNS1=192.168.100.2
    40. DNS2=8.8.8.8
    41. [root@node1 network-scripts]#

    Node2 IP network configuration:

    1. [root@node2 network-scripts]# cat ifcfg-ens33
    2. TYPE="Ethernet"
    3. PROXY_METHOD="none"
    4. BROWSER_ONLY="no"
    5. BOOTPROTO="static"
    6. DEFROUTE="yes"
    7. IPV4_FAILURE_FATAL="no"
    8. IPV6INIT="yes"
    9. IPV6_AUTOCONF="yes"
    10. IPV6_DEFROUTE="yes"
    11. IPV6_FAILURE_FATAL="no"
    12. IPV6_ADDR_GEN_MODE="stable-privacy"
    13. NAME="ens33"
    14. UUID="afc93bb0-6ab3-4fda-a01b-65e5b4ce1e91"
    15. DEVICE="ens33"
    16. ONBOOT="yes"
    17. IPADDR=192.168.8.112
    18. GATEWAY=192.168.8.2
    19. DNS1=192.168.8.2
    20. DNS2=8.8.8.8
    21. [root@node2 network-scripts]# cat ifcfg-ens34
    22. TYPE=Ethernet
    23. PROXY_METHOD=none
    24. BROWSER_ONLY=no
    25. BOOTPROTO=static
    26. DEFROUTE=yes
    27. IPV4_FAILURE_FATAL=no
    28. IPV6INIT=yes
    29. IPV6_AUTOCONF=yes
    30. IPV6_DEFROUTE=yes
    31. IPV6_FAILURE_FATAL=no
    32. IPV6_ADDR_GEN_MODE=stable-privacy
    33. NAME=ens34
    34. UUID=f0182ece-4d8f-4ec4-b7ef-8263da0d7fbc
    35. DEVICE=ens34
    36. ONBOOT=yes
    37. IPADDR=192.168.100.112
    38. GATEWAY=192.168.100.2
    39. DNS1=192.168.100.2
    40. DNS2=8.8.8.8
    41. [root@node2 network-scripts]#

    Configure the hostnames

    1. # node1
    2. hostnamectl set-hostname node1
    3. # node2
    4. hostnamectl set-hostname node2

    Configure /etc/hosts

    1. [oracle@node1 ~]$ cat /etc/hosts
    2. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    3. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    4. # Public
    5. 192.168.8.111 node1 node1.racdb.local
    6. 192.168.8.112 node2 node2.racdb.local
    7. # Private
    8. 192.168.100.111 node1-priv node1-priv.racdb.local
    9. 192.168.100.112 node2-priv node2-priv.racdb.local
    10. # Virtual
    11. 192.168.8.113 node1-vip node1-vip.racdb.local
    12. 192.168.8.114 node2-vip node2-vip.racdb.local
    13. # SCAN
    14. 192.168.8.115 node-cluster-scan node-cluster-scan.racdb.local
    15. 192.168.8.116 node-cluster-scan node-cluster-scan.racdb.local
    16. 192.168.8.117 node-cluster-scan node-cluster-scan.racdb.local
    17. [oracle@node1 ~]$
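
    Note that when the SCAN name is resolved through /etc/hosts rather than DNS, typically only the first of the three addresses is used, and the installer's verification step will raise a warning that can be ignored in a test environment. A quick sanity check of name resolution and connectivity (the VIP and SCAN addresses only respond once Grid Infrastructure is running) might look like this sketch, run on both nodes:

    # check that the public and private names resolve and are reachable
    for h in node1 node2 node1-priv node2-priv; do
        ping -c 1 -W 2 "$h" >/dev/null && echo "$h reachable" || echo "$h FAILED"
    done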

    Configure time synchronization

    Use chrony to synchronize time from public NTP servers. (Since CTSS will handle time synchronization in this cluster, this step can be skipped.)

    1. yum install -y chrony
    2. systemctl enable --now chronyd

    Because the Oracle Cluster Time Synchronization Service (CTSS) synchronizes cluster time, both ntpd and chronyd must be disabled.
     

    1. rm -rf /var/run/chronyd.pid
    2. rm -rf /etc/chrony.conf
    3. rm -rf /etc/ntp.conf
    4. systemctl stop ntpd.service
    5. systemctl disable ntpd
    6. systemctl stop chronyd.service
    7. systemctl disable chronyd.service
    8. ----------------------------------------------
    9. # Check whether a third-party time synchronization service exists in the cluster
    10. cluvfy comp clocksync -n all
    11. # Check the CTSS status
    12. [grid@node1 ~]$ crsctl check ctss
    13. CRS-4700: The Cluster Time Synchronization Service is in Observer mode
    14. # Remove the third-party time synchronization configuration files
    15. rm -rf /etc/ntp.conf
    16. rm -rf /etc/chrony.conf
    17. rm -rf /var/run/chronyd.pid
    18. # After about half a minute, the status changes to Active mode
    19. [grid@node1 ~]$ crsctl check ctss
    20. CRS-4701: The Cluster Time Synchronization Service is in Active mode.
    21. CRS-4702: Offset (in msec): 0
    22. # Re-check cluster clock synchronization
    23. [grid@node1 ~]$ cluvfy comp clocksync -n all
    24. Verifying Clock Synchronization ...PASSED
    25. Verification of Clock Synchronization across the cluster nodes was successful.
    26. CVU operation performed: Clock Synchronization across the cluster nodes
    27. Date: Nov 9, 2022 9:12:41 PM
    28. CVU home: /u01/app/19.3.0/grid/
    29. User: grid
    30. [grid@node1 ~]$

    Configure SELinux and the firewall

    Disable SELinux

    1. getenforce ## first check whether it already reports "Disabled"; if not, run the following
    2. sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config ## changes enforcing or permissive to disabled
    3. After rebooting the OS, run getenforce again and confirm it reports "Disabled"

    Disable the firewalld firewall

    systemctl disable --now firewalld

    [root@node1 ~]#
    [root@node1 ~]# systemctl disable --now firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    [root@node1 ~]#

    [root@node2 ~]#
    [root@node2 ~]# systemctl disable --now firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    [root@node2 ~]#

    Install the dependency packages

    1. yum groupinstall -y "Server with GUI"
    2. yum install -y bc \
    3. binutils \
    4. compat-libcap1 \
    5. compat-libstdc++-33 \
    6. gcc \
    7. gcc-c++ \
    8. elfutils-libelf \
    9. elfutils-libelf-devel \
    10. glibc \
    11. glibc-devel \
    12. ksh \
    13. libaio \
    14. libaio-devel \
    15. libgcc \
    16. libstdc++ \
    17. libstdc++-devel \
    18. libxcb \
    19. libX11 \
    20. libXau \
    21. libXi \
    22. libXtst \
    23. libXrender \
    24. libXrender-devel \
    25. make \
    26. net-tools \
    27. nfs-utils \
    28. smartmontools \
    29. sysstat \
    30. e2fsprogs \
    31. e2fsprogs-libs \
    32. fontconfig-devel \
    33. expect \
    34. unzip \
    35. openssh-clients \
    36. readline* \
    37. tigervnc* \
    38. psmisc --skip-broken

    Check that the dependency packages are installed:

    rpm -q bc binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ elfutils-libelf elfutils-libelf-devel glibc glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst libXrender libXrender-devel make net-tools nfs-utils smartmontools sysstat e2fsprogs e2fsprogs-libs fontconfig-devel expect unzip openssh-clients readline | grep "not installed"

    Create the required users and groups

    Create the users and groups on both nodes:

    1. # Create the groups
    2. groupadd -g 54321 oinstall
    3. groupadd -g 54322 dba
    4. groupadd -g 54323 oper
    5. groupadd -g 54324 backupdba
    6. groupadd -g 54325 dgdba
    7. groupadd -g 54326 kmdba
    8. groupadd -g 54327 asmdba
    9. groupadd -g 54328 asmoper
    10. groupadd -g 54329 asmadmin
    11. groupadd -g 54330 racdba
    12. # Create the users and add them to the groups
    13. useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
    14. useradd -u 54331 -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba grid
    15. # Set the user passwords
    16. echo "oracle" | passwd oracle --stdin
    17. echo "grid" | passwd grid --stdin
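
    The UIDs, GIDs, and group memberships must be identical on both nodes; a quick check on each node:

    id oracle
    id grid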

    Create the required directories

    1. mkdir -p /u01/app/19.3.0/grid
    2. mkdir -p /u01/app/grid
    3. mkdir -p /u01/app/oracle/product/19.3.0/dbhome_1
    4. chown -R grid:oinstall /u01
    5. chown -R oracle:oinstall /u01/app/oracle
    6. chmod -R 775 /u01/

    Configure /dev/shm (shmem)

    1. cat >>/etc/fstab<<EOF
    2. tmpfs /dev/shm tmpfs defaults,size=8G 0 0
    3. EOF

    Configure NOZEROCONF

    1. cat >>/etc/sysconfig/network<<EOF
    2. NOZEROCONF=yes
    3. EOF

    Login (PAM) configuration

    1. cat >>/etc/pam.d/login<<EOF
    2. #ORACLE SETTING
    3. session required pam_limits.so
    4. EOF

    Configure kernel parameters

    1. # Configure the following kernel parameters
    2. cat >/etc/sysctl.d/97-oracledatabase-sysctl.conf<<EOF
    3. fs.aio-max-nr = 1048576
    4. fs.file-max = 6815744
    5. kernel.shmall = 2097152
    6. kernel.shmmax = 4294967295
    7. kernel.shmmni = 4096
    8. kernel.sem = 250 32000 100 128
    9. net.ipv4.ip_local_port_range = 9000 65500
    10. net.core.rmem_default = 262144
    11. net.core.rmem_max = 4194304
    12. net.core.wmem_default = 262144
    13. net.core.wmem_max = 1048576
    14. EOF
    15. # Apply the configuration
    16. sysctl --system
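
    To confirm the new values are in effect, a few of them can be queried directly:

    sysctl fs.aio-max-nr fs.file-max kernel.sem net.ipv4.ip_local_port_range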

    Set resource limits for the users

    Configure security (resource) limits for the oracle and grid users:

    1. cat >/etc/security/limits.d/30-oracle.conf<<EOF
    2. grid soft nproc 16384
    3. grid hard nproc 16384
    4. grid soft nofile 1024
    5. grid hard nofile 65536
    6. grid soft stack 10240
    7. grid hard stack 32768
    8. oracle soft nproc 16384
    9. oracle hard nproc 16384
    10. oracle soft nofile 1024
    11. oracle hard nofile 65536
    12. oracle soft stack 10240
    13. oracle hard stack 32768
    14. oracle hard memlock 3145728
    15. oracle soft memlock 3145728
    16. EOF
    17. cat>>/etc/security/limits.d/20-nproc.conf<<EOF
    18. * - nproc 16384
    19. EOF
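
    To verify the limits, check them from a fresh login shell for each user; the values reported should match the soft limits configured above:

    su - grid -c 'ulimit -n -u -s'
    su - oracle -c 'ulimit -n -u -s'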

    Modify the user profiles

    Note that the ORACLE_HOSTNAME and ORACLE_SID variables differ between node1 and node2.

    For the grid user, ORACLE_SID is +ASM1 on node1 and +ASM2 on node2.

    node1 configuration

    1. # grid user
    2. cat>>/home/grid/.bash_profile<<'EOF'
    3. # oracle grid
    4. export TMP=/tmp
    5. export TMPDIR=$TMP
    6. export ORACLE_HOSTNAME=node1.racdb.local
    7. export ORACLE_BASE=/u01/app/grid
    8. export ORACLE_HOME=/u01/app/19.3.0/grid
    9. export ORACLE_SID=+ASM1
    10. export ORACLE_TERM=xterm
    11. export PATH=$ORACLE_HOME/bin:$PATH
    12. export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    13. export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    14. EOF
    15. # oracle user
    16. cat>>/home/oracle/.bash_profile<<'EOF'
    17. # oracle
    18. export TMP=/tmp
    19. export TMPDIR=$TMP
    20. export ORACLE_HOSTNAME=node1.racdb.local
    21. export ORACLE_UNQNAME=racdb
    22. export ORACLE_BASE=/u01/app/oracle
    23. export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
    24. export ORACLE_SID=racdb1
    25. export ORACLE_TERM=xterm
    26. export PATH=$ORACLE_HOME/bin:$PATH
    27. export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    28. export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    29. EOF

    node2 configuration

    1. # grid user
    2. cat>>/home/grid/.bash_profile<<'EOF'
    3. # oracle grid
    4. export TMP=/tmp
    5. export TMPDIR=$TMP
    6. export ORACLE_HOSTNAME=node2.racdb.local
    7. export ORACLE_BASE=/u01/app/grid
    8. export ORACLE_HOME=/u01/app/19.3.0/grid
    9. export ORACLE_SID=+ASM2
    10. export ORACLE_TERM=xterm
    11. export PATH=$ORACLE_HOME/bin:$PATH
    12. export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    13. export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    14. EOF
    15. # oracle user
    16. cat>>/home/oracle/.bash_profile<<'EOF'
    17. # oracle
    18. export TMP=/tmp
    19. export TMPDIR=$TMP
    20. export ORACLE_HOSTNAME=node2.racdb.local
    21. export ORACLE_UNQNAME=racdb
    22. export ORACLE_BASE=/u01/app/oracle
    23. export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
    24. export ORACLE_SID=racdb2
    25. export ORACLE_TERM=xterm
    26. export PATH=$ORACLE_HOME/bin:$PATH
    27. export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    28. export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    29. EOF
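
    After the profiles are in place (log out and back in), the key variables can be spot-checked on each node as root, for example:

    su - grid -c 'echo $ORACLE_SID $ORACLE_HOME'
    su - oracle -c 'echo $ORACLE_SID $ORACLE_HOME'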

    Configure passwordless SSH between the nodes

    On node1:

    1. su - grid
    2. ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    3. ssh-copy-id node2
    4. su - oracle
    5. ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    6. ssh-copy-id node2

    On node2:

    1. su - grid
    2. ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    3. ssh-copy-id node1
    4. su - oracle
    5. ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    6. ssh-copy-id node1
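
    Before running the installer, confirm that user equivalence works in both directions for both users; each command below should print the date without prompting for a password (the installer's SSH connectivity screen can also complete this setup, including equivalence to the local node). For example, as grid and again as oracle on node1, and the mirror image on node2:

    ssh node2 date
    ssh node2-priv date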

    Add the following parameter to the NIC configuration files under /etc/sysconfig/network-scripts.

    After the IP addresses are configured on both machines and the network service is restarted, the two assigned NICs are brought up automatically.

    HOTPLUG="no"

    Disable Transparent HugePages

    1. # Create a systemd unit file
    2. cat > /etc/systemd/system/disable-thp.service <<EOF
    3. [Unit]
    4. Description=Disable Transparent Huge Pages (THP)
    5. [Service]
    6. Type=simple
    7. ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
    8. [Install]
    9. WantedBy=multi-user.target
    10. EOF
    11. # Enable and start the service
    12. systemctl enable --now disable-thp

    [root@node1 ~]#
    [root@node1 ~]# cat > /etc/systemd/system/disable-thp.service <<EOF
    > [Unit]
    > Description=Disable Transparent Huge Pages (THP)
    >
    > [Service]
    > Type=simple
    > ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
    >
    > [Install]
    > WantedBy=multi-user.target
    > EOF
    [root@node1 ~]# systemctl enable --now disable-thp
    Created symlink from /etc/systemd/system/multi-user.target.wants/disable-thp.service to /etc/systemd/system/disable-thp.service.
    [root@node1 ~]#

    [root@node2 ~]#
    [root@node2 ~]# cat > /etc/systemd/system/disable-thp.service <<EOF
    > [Unit]
    > Description=Disable Transparent Huge Pages (THP)
    >
    > [Service]
    > Type=simple
    > ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
    >
    > [Install]
    > WantedBy=multi-user.target
    > EOF
    [root@node2 ~]# systemctl enable --now disable-thp
    Created symlink from /etc/systemd/system/multi-user.target.wants/disable-thp.service to /etc/systemd/system/disable-thp.service.
    [root@node2 ~]#
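
    After the service has run (or after the next reboot), THP should be disabled on both nodes; the selected value, shown in brackets, should be [never]:

    cat /sys/kernel/mm/transparent_hugepage/enabled
    cat /sys/kernel/mm/transparent_hugepage/defrag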

    Configure shared storage (note: reboot after installing the oracleasm packages, then initialize)

    1. yum install -y kmod-oracleasm
    2. wget https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
    3. wget https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracleasm-support-2.1.11-2.el7.x86_64.rpm
    4. yum -y localinstall oracleasmlib-2.0.12-1.el7.x86_64.rpm
    5. yum -y localinstall oracleasm-support-2.1.11-2.el7.x86_64.rpm
    6. # Initialize
    7. oracleasm init
    8. # Modify the configuration (ASM disks owned by grid:asmadmin)
    9. oracleasm configure -e -u grid -g asmadmin

    Check the configuration:

    1. [root@node1 ~]# oracleasm configure
    2. ORACLEASM_ENABLED=true
    3. ORACLEASM_UID=grid
    4. ORACLEASM_GID=asmadmin
    5. ORACLEASM_SCANBOOT=true
    6. ORACLEASM_SCANORDER=""
    7. ORACLEASM_SCANEXCLUDE=""
    8. ORACLEASM_SCAN_DIRECTORIES=""
    9. ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
    10. [root@node1 ~]#
    1. [root@node2 ~]# oracleasm configure
    2. ORACLEASM_ENABLED=true
    3. ORACLEASM_UID=grid
    4. ORACLEASM_GID=asmadmin
    5. ORACLEASM_SCANBOOT=true
    6. ORACLEASM_SCANORDER=""
    7. ORACLEASM_SCANEXCLUDE=""
    8. ORACLEASM_SCAN_DIRECTORIES=""
    9. ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
    10. [root@node2 ~]#
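
    Before partitioning, it is worth confirming on both nodes that the oracleasm kernel module is loaded and its filesystem is mounted; if not, reboot as noted above before continuing:

    oracleasm status          # should report the driver is loaded and /dev/oracleasm is mounted
    lsmod | grep oracleasm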

    On node1 only, partition the disks:

    1. parted /dev/sdb -s -- mklabel gpt mkpart primary 1 -1
    2. parted /dev/sdc -s -- mklabel gpt mkpart primary 1 -1
    3. parted /dev/sdd -s -- mklabel gpt mkpart primary 1 -1

    [root@node1 ~]#
    [root@node1 ~]# parted /dev/sdb -s -- mklabel gpt mkpart primary 1 -1
    [root@node1 ~]# parted /dev/sdc -s -- mklabel gpt mkpart primary 1 -1
    [root@node1 ~]# parted /dev/sdd -s -- mklabel gpt mkpart primary 1 -1
    [root@node1 ~]#

    Confirm the partition layout

    1. [root@node1 ~]#
    2. [root@node1 ~]# lsblk
    3. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    4. sda 8:0 0 80G 0 disk
    5. ├─sda1 8:1 0 1G 0 part /boot
    6. └─sda2 8:2 0 79G 0 part
    7. ├─centos-root 253:0 0 47.8G 0 lvm /
    8. ├─centos-swap 253:1 0 7.9G 0 lvm [SWAP]
    9. └─centos-home 253:2 0 23.3G 0 lvm /home
    10. sdb 8:16 0 10G 0 disk
    11. └─sdb1 8:17 0 10G 0 part
    12. sdc 8:32 0 40G 0 disk
    13. └─sdc1 8:33 0 40G 0 part
    14. sdd 8:48 0 20G 0 disk
    15. └─sdd1 8:49 0 20G 0 part
    16. sr0 11:0 1 1024M 0 rom
    17. [root@node1 ~]#

    On node1 only, create the ASM disks with oracleasm, using your actual device names:

    1. oracleasm createdisk OCR1 /dev/sdb1
    2. oracleasm createdisk DATA1 /dev/sdc1
    3. oracleasm createdisk FRA1 /dev/sdd1

    [root@node1 ~]#
    [root@node1 ~]# oracleasm createdisk OCR1 /dev/sdb1
    Writing disk header: done
    Instantiating disk: done
    [root@node1 ~]# oracleasm createdisk DATA1 /dev/sdc1
    Writing disk header: done
    Instantiating disk: done
    [root@node1 ~]# oracleasm createdisk FRA1 /dev/sdd1
    Writing disk header: done
    Instantiating disk: done
    [root@node1 ~]#

    Run on all nodes:

    1. oracleasm scandisks
    2. oracleasm listdisks

    [root@node1 ~]#
    [root@node1 ~]# oracleasm scandisks
    Reloading disk partitions: done
    Cleaning any stale ASM disks...
    Scanning system for ASM disks...
    [root@node1 ~]#
    [root@node1 ~]#
    [root@node1 ~]# oracleasm listdisks
    DATA1
    FRA1
    OCR1
    [root@node1 ~]#

    [root@node2 ~]#
    [root@node2 ~]# oracleasm scandisks
    Reloading disk partitions: done
    Cleaning any stale ASM disks...
    Scanning system for ASM disks...
    Instantiating disk "OCR1"
    Instantiating disk "DATA1"
    Instantiating disk "FRA1"
    [root@node2 ~]#
    [root@node2 ~]#
    [root@node2 ~]# oracleasm listdisks
    DATA1
    FRA1
    OCR1
    [root@node2 ~]#

    Check the disk devices

    1. [root@node1 ~]#
    2. [root@node1 ~]# ls -l /dev/oracleasm/disks
    3. total 0
    4. brw-rw----. 1 grid asmadmin 8, 33 Nov 8 16:37 DATA1
    5. brw-rw----. 1 grid asmadmin 8, 49 Nov 8 16:37 FRA1
    6. brw-rw----. 1 grid asmadmin 8, 17 Nov 8 16:37 OCR1
    7. [root@node1 ~]#
    8. [root@node2 ~]#
    9. [root@node2 ~]# ls -l /dev/oracleasm/disks
    10. total 0
    11. brw-rw----. 1 grid asmadmin 8, 33 Nov 8 16:39 DATA1
    12. brw-rw----. 1 grid asmadmin 8, 49 Nov 8 16:39 FRA1
    13. brw-rw----. 1 grid asmadmin 8, 17 Nov 8 16:39 OCR1
    14. [root@node2 ~]#

    Start installing GRID (Grid Infrastructure)

    Perform this on the first node, node1.

    Log in as the grid user over SSH and upload the downloaded installation package LINUX.X64_193000_grid_home.zip to the grid home directory ($ORACLE_HOME for the grid user, i.e. /u01/app/19.3.0/grid).

    Unzip it into the $ORACLE_HOME directory:

    [grid@node1 ~]$ unzip LINUX.X64_193000_grid_home.zip -d $ORACLE_HOME
    

    Copy the cvuqdisk rpm package to every node in the cluster:

    scp $ORACLE_HOME/cv/rpm/cvuqdisk-1.0.10-1.rpm root@node2:/tmp
    

    Switch back to the root user and install the cvuqdisk rpm package on both nodes.

    1. # node1
    2. CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
    3. rpm -iv /u01/app/19.3.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
    4. # node2
    5. CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
    6. rpm -iv /tmp/cvuqdisk-1.0.10-1.rpm
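
    Before launching the graphical installer, the cluster verification utility shipped in the unzipped grid home can pre-check both nodes; for example, run as the grid user on node1:

    /u01/app/19.3.0/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose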

    If the operating system was a minimal install with no graphical environment, install xorg-x11-xinit on node1 (the package may in fact already be present) and install Xming on Windows to display the GUI:

    1. yum install -y xorg-x11-xinit
    2. # Log out and back in for the change to take effect
    3. exit

    Install the Windows version of Xming

    Xming X Server for Windows download | SourceForge.net

    In SecureCRT, enable X11 forwarding in the session options.

    Download and install Xming on Windows and simply start it; SecureCRT will then forward the graphical interface to the Xming server for display.
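
    Before starting the installer, it is worth confirming that X forwarding actually reaches Xming from the grid session (xclock is only available if xorg-x11-apps is installed):

    echo $DISPLAY   # should be set, e.g. localhost:10.0, when SecureCRT forwards X11
    xclock          # a clock window should appear in Xming; close it and continue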

    Log in to node1 as the grid user and change to the ORACLE_HOME directory:

    [grid@node1 ~]$ cd $ORACLE_HOME
    

    On the node1 command line, run the following command to start the Grid installation:

    ./gridSetup.sh
    


    Resolving the PRVG-13602 problem reported by the prerequisite checks (fixed here by enabling chronyd):

    1. [root@node1 ~]# systemctl status ntpd.service
    2. ● ntpd.service - Network Time Service
    3. Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
    4. Active: inactive (dead)
    5. Nov 09 07:44:54 node1 ntpd[859]: Listen normally on 7 ens34 192.168.100.111 UDP 123
    6. Nov 09 07:44:54 node1 ntpd[859]: Listen normally on 8 virbr0 192.168.122.1 UDP 123
    7. Nov 09 07:44:54 node1 ntpd[859]: 193.182.111.14 interface 192.168.8.111 -> 192.168.100.111
    8. Nov 09 07:44:54 node1 ntpd[859]: 185.209.85.222 interface 192.168.8.111 -> 192.168.100.111
    9. Nov 09 07:44:54 node1 ntpd[859]: 194.58.203.148 interface 192.168.8.111 -> 192.168.100.111
    10. Nov 09 07:44:54 node1 ntpd[859]: 116.203.151.74 interface 192.168.8.111 -> 192.168.100.111
    11. Nov 09 07:44:54 node1 ntpd[859]: new interface(s) found: waking up resolver
    12. Nov 09 08:01:32 node1 ntpd[859]: ntpd exiting on signal 15
    13. Nov 09 08:01:32 node1 systemd[1]: Stopping Network Time Service...
    14. Nov 09 08:01:32 node1 systemd[1]: Stopped Network Time Service.
    15. [root@node1 ~]# systemctl status chronyd
    16. ● chronyd.service - NTP client/server
    17. Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled)
    18. Active: inactive (dead)
    19. Docs: man:chronyd(8)
    20. man:chrony.conf(5)
    21. [root@node1 ~]#
    22. [root@node1 ~]# systemctl enable --now chronyd
    23. Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
    24. [root@node1 ~]# systemctl status chronyd
    25. ● chronyd.service - NTP client/server
    26. Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
    27. Active: active (running) since Wed 2022-11-09 08:04:42 CST; 2s ago
    28. Docs: man:chronyd(8)
    29. man:chrony.conf(5)
    30. Process: 4314 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
    31. Process: 4310 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
    32. Main PID: 4312 (chronyd)
    33. Tasks: 1
    34. CGroup: /system.slice/chronyd.service
    35. └─4312 /usr/sbin/chronyd
    36. Nov 09 08:04:41 node1 systemd[1]: Starting NTP client/server...
    37. Nov 09 08:04:42 node1 chronyd[4312]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
    38. Nov 09 08:04:42 node1 chronyd[4312]: Frequency 0.384 +/- 3.150 ppm read from /var/lib/chrony/drift
    39. Nov 09 08:04:42 node1 systemd[1]: Started NTP client/server.
    40. [root@node1 ~]#

    Click Yes and the root configuration scripts are executed automatically.

    Ignore the reported error (INS-20802: Oracle Cluster Verification Utility failed) and complete the installation.

    Get the status and configuration information of the cluster resources

    1. [grid@node1 grid]$
    2. [grid@node1 grid]$ crsctl status resource -t
    3. --------------------------------------------------------------------------------
    4. Name Target State Server State details
    5. --------------------------------------------------------------------------------
    6. Local Resources
    7. --------------------------------------------------------------------------------
    8. ora.LISTENER.lsnr
    9. ONLINE ONLINE node1 STABLE
    10. ONLINE ONLINE node2 STABLE
    11. ora.chad
    12. ONLINE ONLINE node1 STABLE
    13. ONLINE ONLINE node2 STABLE
    14. ora.net1.network
    15. ONLINE ONLINE node1 STABLE
    16. ONLINE ONLINE node2 STABLE
    17. ora.ons
    18. ONLINE ONLINE node1 STABLE
    19. ONLINE ONLINE node2 STABLE
    20. --------------------------------------------------------------------------------
    21. Cluster Resources
    22. --------------------------------------------------------------------------------
    23. ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
    24. 1 ONLINE ONLINE node1 STABLE
    25. 2 ONLINE ONLINE node2 STABLE
    26. 3 OFFLINE OFFLINE STABLE
    27. ora.LISTENER_SCAN1.lsnr
    28. 1 ONLINE ONLINE node2 STABLE
    29. ora.LISTENER_SCAN2.lsnr
    30. 1 ONLINE ONLINE node1 STABLE
    31. ora.LISTENER_SCAN3.lsnr
    32. 1 ONLINE ONLINE node1 STABLE
    33. ora.OCR.dg(ora.asmgroup)
    34. 1 ONLINE ONLINE node1 STABLE
    35. 2 ONLINE ONLINE node2 STABLE
    36. 3 OFFLINE OFFLINE STABLE
    37. ora.asm(ora.asmgroup)
    38. 1 ONLINE ONLINE node1 Started,STABLE
    39. 2 ONLINE ONLINE node2 Started,STABLE
    40. 3 OFFLINE OFFLINE STABLE
    41. ora.asmnet1.asmnetwork(ora.asmgroup)
    42. 1 ONLINE ONLINE node1 STABLE
    43. 2 ONLINE ONLINE node2 STABLE
    44. 3 OFFLINE OFFLINE STABLE
    45. ora.cvu
    46. 1 ONLINE ONLINE node1 STABLE
    47. ora.node1.vip
    48. 1 ONLINE ONLINE node1 STABLE
    49. ora.node2.vip
    50. 1 ONLINE ONLINE node2 STABLE
    51. ora.qosmserver
    52. 1 ONLINE ONLINE node1 STABLE
    53. ora.scan1.vip
    54. 1 ONLINE ONLINE node2 STABLE
    55. ora.scan2.vip
    56. 1 ONLINE ONLINE node1 STABLE
    57. ora.scan3.vip
    58. 1 ONLINE ONLINE node1 STABLE
    59. --------------------------------------------------------------------------------
    60. [grid@node1 grid]$

    Check the status of Oracle High Availability Services and the Oracle Clusterware stack on the local server

    1. [grid@node1 grid]$ crsctl check crs
    2. CRS-4638: Oracle High Availability Services is online
    3. CRS-4537: Cluster Ready Services is online
    4. CRS-4529: Cluster Synchronization Services is online
    5. CRS-4533: Event Manager is online
    6. [grid@node1 grid]$

    View node1 IP information

    1. [grid@node1 grid]$
    2. [grid@node1 grid]$ ip a
    3. 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    4. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    5. inet 127.0.0.1/8 scope host lo
    6. valid_lft forever preferred_lft forever
    7. inet6 ::1/128 scope host
    8. valid_lft forever preferred_lft forever
    9. 2: ens33: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    10. link/ether 00:0c:29:38:85:6b brd ff:ff:ff:ff:ff:ff
    11. inet 192.168.8.111/24 brd 192.168.8.255 scope global ens33
    12. valid_lft forever preferred_lft forever
    13. inet 192.168.8.113/24 brd 192.168.8.255 scope global secondary ens33:1
    14. valid_lft forever preferred_lft forever
    15. inet 192.168.8.116/24 brd 192.168.8.255 scope global secondary ens33:3
    16. valid_lft forever preferred_lft forever
    17. inet 192.168.8.117/24 brd 192.168.8.255 scope global secondary ens33:4
    18. valid_lft forever preferred_lft forever
    19. inet6 fe80::20c:29ff:fe38:856b/64 scope link
    20. valid_lft forever preferred_lft forever
    21. 3: ens34: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    22. link/ether 00:0c:29:38:85:75 brd ff:ff:ff:ff:ff:ff
    23. inet 192.168.100.111/24 brd 192.168.100.255 scope global ens34
    24. valid_lft forever preferred_lft forever
    25. inet 169.254.10.54/19 brd 169.254.31.255 scope global ens34:1
    26. valid_lft forever preferred_lft forever
    27. inet6 fe80::20c:29ff:fe38:8575/64 scope link
    28. valid_lft forever preferred_lft forever
    29. 4: virbr0: mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    30. link/ether 52:54:00:52:aa:7a brd ff:ff:ff:ff:ff:ff
    31. inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
    32. valid_lft forever preferred_lft forever
    33. 5: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    34. link/ether 52:54:00:52:aa:7a brd ff:ff:ff:ff:ff:ff
    35. [grid@node1 grid]$

    View node2 IP information

    1. [grid@node2 ~]$
    2. [grid@node2 ~]$ ip a
    3. 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    4. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    5. inet 127.0.0.1/8 scope host lo
    6. valid_lft forever preferred_lft forever
    7. inet6 ::1/128 scope host
    8. valid_lft forever preferred_lft forever
    9. 2: ens33: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    10. link/ether 00:0c:29:14:58:46 brd ff:ff:ff:ff:ff:ff
    11. inet 192.168.8.112/24 brd 192.168.8.255 scope global ens33
    12. valid_lft forever preferred_lft forever
    13. inet 192.168.8.115/24 brd 192.168.8.255 scope global secondary ens33:1
    14. valid_lft forever preferred_lft forever
    15. inet 192.168.8.114/24 brd 192.168.8.255 scope global secondary ens33:2
    16. valid_lft forever preferred_lft forever
    17. inet6 fe80::20c:29ff:fe14:5846/64 scope link
    18. valid_lft forever preferred_lft forever
    19. 3: ens34: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    20. link/ether 00:0c:29:14:58:50 brd ff:ff:ff:ff:ff:ff
    21. inet 192.168.100.112/24 brd 192.168.100.255 scope global ens34
    22. valid_lft forever preferred_lft forever
    23. inet 169.254.15.202/19 brd 169.254.31.255 scope global ens34:1
    24. valid_lft forever preferred_lft forever
    25. inet6 fe80::20c:29ff:fe14:5850/64 scope link
    26. valid_lft forever preferred_lft forever
    27. 4: virbr0: mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    28. link/ether 52:54:00:b8:6c:cb brd ff:ff:ff:ff:ff:ff
    29. inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
    30. valid_lft forever preferred_lft forever
    31. 5: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    32. link/ether 52:54:00:b8:6c:cb brd ff:ff:ff:ff:ff:ff
    33. [grid@node2 ~]$

    Create the disk groups for the database

    As the grid user, run asmca:

    [grid@node1 grid]$ asmca
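
    If you prefer the command line to the asmca GUI, the DATA and FRA disk groups can also be created from SQL*Plus as the grid user (connected / as sysasm). This is only a sketch, assuming the ASMLib disk strings ORCL:DATA1 and ORCL:FRA1 and the external redundancy planned earlier; after creating the groups on node1, mount them on node2 (asmca handles all of this automatically):

    SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA1';
    SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA1';
    -- then on node2:
    SQL> ALTER DISKGROUP DATA MOUNT;
    SQL> ALTER DISKGROUP FRA MOUNT;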

    Check the disk group mount status and the CRSD status

    1. [grid@node1 ~]$
    2. [grid@node1 ~]$ sqlplus / as sysasm
    3. SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 9 14:50:54 2022
    4. Version 19.3.0.0.0
    5. Copyright (c) 1982, 2019, Oracle. All rights reserved.
    6. Connected to:
    7. Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    8. Version 19.3.0.0.0
    9. SQL> sekec^H^H
    10. SP2-0042: unknown command "sek" - rest of line ignored.
    11. SQL> select NAME,state from v$asm_diskgroup;
    12. NAME STATE
    13. ------------------------------ -----------
    14. OCR MOUNTED
    15. DATA MOUNTED
    16. FRA MOUNTED
    17. SQL>

    Start installing the Oracle database software

    Log in via SSH as the oracle user and unzip the downloaded zip file into the ORACLE_HOME directory.

    [oracle@node1 ~]$ unzip LINUX.X64_193000_db_home.zip -d $ORACLE_HOME
    

    Change to the ORACLE_HOME directory:

    cd $ORACLE_HOME
    

    Then run runInstaller:

    ./runInstaller
    

    Note the installer prompt: for RAC, install the software only first, then run DBCA to create the database:


    Begin creating the database

    Connect via SSH as the oracle user and verify the DBCA requirements:

    /u01/app/19.3.0/grid/bin/cluvfy stage -pre dbcfg -fixup -n node1,node2 -d /u01/app/oracle/product/19.3.0/dbhome_1 -verbose

    Run dbca:

    dbca
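
    For reference, DBCA can also run non-interactively. The following is only a rough sketch of a silent RAC creation using the disk groups above; verify the option names against dbca -help for your release, and note that <password> is a placeholder:

    dbca -silent -createDatabase \
      -templateName General_Purpose.dbc \
      -gdbName racdb -sid racdb \
      -databaseConfigType RAC \
      -nodelist node1,node2 \
      -storageType ASM -diskGroupName +DATA \
      -recoveryGroupName +FRA -enableArchive true \
      -sysPassword <password> -systemPassword <password>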
    


    Check the cluster resource status

    1. [grid@node1 ~]$ crsctl status res -t
    2. --------------------------------------------------------------------------------
    3. Name Target State Server State details
    4. --------------------------------------------------------------------------------
    5. Local Resources
    6. --------------------------------------------------------------------------------
    7. ora.LISTENER.lsnr
    8. ONLINE ONLINE node1 STABLE
    9. ONLINE ONLINE node2 STABLE
    10. ora.chad
    11. ONLINE ONLINE node1 STABLE
    12. ONLINE ONLINE node2 STABLE
    13. ora.net1.network
    14. ONLINE ONLINE node1 STABLE
    15. ONLINE ONLINE node2 STABLE
    16. ora.ons
    17. ONLINE ONLINE node1 STABLE
    18. ONLINE ONLINE node2 STABLE
    19. --------------------------------------------------------------------------------
    20. Cluster Resources
    21. --------------------------------------------------------------------------------
    22. ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
    23. 1 ONLINE ONLINE node1 STABLE
    24. 2 ONLINE ONLINE node2 STABLE
    25. 3 OFFLINE OFFLINE STABLE
    26. ora.DATA.dg(ora.asmgroup)
    27. 1 ONLINE ONLINE node1 STABLE
    28. 2 ONLINE ONLINE node2 STABLE
    29. 3 ONLINE OFFLINE STABLE
    30. ora.FRA.dg(ora.asmgroup)
    31. 1 ONLINE ONLINE node1 STABLE
    32. 2 ONLINE ONLINE node2 STABLE
    33. 3 ONLINE OFFLINE STABLE
    34. ora.LISTENER_SCAN1.lsnr
    35. 1 ONLINE ONLINE node1 STABLE
    36. ora.LISTENER_SCAN2.lsnr
    37. 1 ONLINE ONLINE node2 STABLE
    38. ora.LISTENER_SCAN3.lsnr
    39. 1 ONLINE ONLINE node1 STABLE
    40. ora.OCR.dg(ora.asmgroup)
    41. 1 ONLINE ONLINE node1 STABLE
    42. 2 ONLINE ONLINE node2 STABLE
    43. 3 OFFLINE OFFLINE STABLE
    44. ora.asm(ora.asmgroup)
    45. 1 ONLINE ONLINE node1 Started,STABLE
    46. 2 ONLINE ONLINE node2 Started,STABLE
    47. 3 OFFLINE OFFLINE STABLE
    48. ora.asmnet1.asmnetwork(ora.asmgroup)
    49. 1 ONLINE ONLINE node1 STABLE
    50. 2 ONLINE ONLINE node2 STABLE
    51. 3 OFFLINE OFFLINE STABLE
    52. ora.cvu
    53. 1 ONLINE ONLINE node1 STABLE
    54. ora.node1.vip
    55. 1 ONLINE ONLINE node1 STABLE
    56. ora.node2.vip
    57. 1 ONLINE ONLINE node2 STABLE
    58. ora.qosmserver
    59. 1 ONLINE ONLINE node2 STABLE
    60. ora.racdb.db
    61. 1 ONLINE ONLINE node1 Open,HOME=/u01/app/o
    62. racle/product/19.3.0
    63. /dbhome_1,STABLE
    64. 2 ONLINE ONLINE node2 Open,HOME=/u01/app/o
    65. racle/product/19.3.0
    66. /dbhome_1,STABLE
    67. ora.scan1.vip
    68. 1 ONLINE ONLINE node1 STABLE
    69. ora.scan2.vip
    70. 1 ONLINE ONLINE node2 STABLE
    71. ora.scan3.vip
    72. 1 ONLINE ONLINE node1 STABLE
    73. --------------------------------------------------------------------------------
    74. [grid@node1 ~]$

    Verify the database status:

    1. [grid@node1 ~]$
    2. [grid@node1 ~]$ srvctl status database -d racdb
    3. Instance racdb1 is running on node node1
    4. Instance racdb2 is running on node node2
    5. [grid@node1 ~]$

    Check the database configuration

    1. [grid@node1 ~]$
    2. [grid@node1 ~]$ srvctl status database -d racdb
    3. Instance racdb1 is running on node node1
    4. Instance racdb2 is running on node node2
    5. [grid@node1 ~]$ srvctl config database -d racdb
    6. Database unique name: racdb
    7. Database name: racdb
    8. Oracle home: /u01/app/oracle/product/19.3.0/dbhome_1
    9. Oracle user: oracle
    10. Spfile: +DATA/RACDB/PARAMETERFILE/spfile.272.1120332595
    11. Password file: +DATA/RACDB/PASSWORD/pwdracdb.256.1120329127
    12. Domain:
    13. Start options: open
    14. Stop options: immediate
    15. Database role: PRIMARY
    16. Management policy: AUTOMATIC
    17. Server pools:
    18. Disk Groups: DATA,FRA
    19. Mount point paths:
    20. Services:
    21. Type: RAC
    22. Start concurrency:
    23. Stop concurrency:
    24. OSDBA group: dba
    25. OSOPER group: oper
    26. Database instances: racdb1,racdb2
    27. Configured nodes: node1,node2
    28. CSS critical: no
    29. CPU count: 0
    30. Memory target: 0
    31. Maximum memory: 0
    32. Default network number for database services:
    33. Database is administrator managed
    34. [grid@node1 ~]$

    Connect to the database and check the instances

    1. [oracle@node1 ~]$ sqlplus / as sysdba
    2. SQL*Plus: Release 19.0.0.0.0 - Production on Wed Nov 9 19:38:59 2022
    3. Version 19.3.0.0.0
    4. Copyright (c) 1982, 2019, Oracle. All rights reserved.
    5. Connected to:
    6. Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    7. Version 19.3.0.0.0
    8. SQL> select instance_name,status from gv$insta^H
    9. 2
    10. SQL> select instance_name,status from gv$Instance;
    11. INSTANCE_NAME STATUS
    12. ---------------- ------------
    13. racdb1 OPEN
    14. racdb2 OPEN
    15. SQL>
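
    As a final check, a client-style connection through the SCAN name should reach one of the instances. A sketch using EZConnect, where <password> is the SYSTEM password chosen in DBCA:

    sqlplus system/<password>@//node-cluster-scan.racdb.local:1521/racdb

    SQL> select instance_name, host_name from v$instance;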