• NVIDIA NCCL Source Code Study (3): Intra-Machine Topology Analysis


    The previous section described how all ranks establish their bootstrap network connections; this section covers topology analysis.

    GPU machine architectures vary widely: a single machine may have multiple NICs and multiple GPUs, and the links between cards differ as well. NCCL therefore analyzes the device interconnect topology inside the machine so that it can get the best possible performance on every kind of topology.

    Continuing from last time, let's keep reading initTransportsRank:

    static ncclResult_t initTransportsRank(struct ncclComm* comm, ncclUniqueId* commId) {
      // We use 3 AllGathers
      // 1. { peerInfo, comm }
      // 2. ConnectTransport[nranks], ConnectValue[nranks]
      // 3. { nThreads, nrings, compCap, prev[MAXCHANNELS], next[MAXCHANNELS] }
      int rank = comm->rank;
      int nranks = comm->nRanks;
      uint64_t commHash = getHash(commId->internal, NCCL_UNIQUE_ID_BYTES);
      TRACE(NCCL_INIT, "comm %p, commHash %lx, rank %d nranks %d - BEGIN", comm, commHash, rank, nranks);
      NCCLCHECK(bootstrapInit(commId, rank, nranks, &comm->bootstrap));
      // AllGather1 - begin
      struct {
        struct ncclPeerInfo peerInfo;
        struct ncclComm* comm;
      } *allGather1Data;
      NCCLCHECK(ncclCalloc(&allGather1Data, nranks));
      allGather1Data[rank].comm = comm;
      struct ncclPeerInfo* myInfo = &allGather1Data[rank].peerInfo;
      NCCLCHECK(fillInfo(comm, myInfo, commHash));
      ...
    }

    nranks allGather1Data entries are allocated, and fillInfo fills in the peerInfo of the current rank. ncclPeerInfo holds a rank's basic information, such as its rank number and which process on which host it lives in:

    struct ncclPeerInfo {
      int rank;
      int cudaDev;
      int gdrSupport;
      uint64_t hostHash;
      uint64_t pidHash;
      dev_t shmDev;
      int64_t busId;
    };

    static ncclResult_t fillInfo(struct ncclComm* comm, struct ncclPeerInfo* info, uint64_t commHash) {
      info->rank = comm->rank;
      CUDACHECK(cudaGetDevice(&info->cudaDev));
      info->hostHash=getHostHash()+commHash;
      info->pidHash=getPidHash()+commHash;
      // Get the device MAJOR:MINOR of /dev/shm so we can use that
      // information to decide whether we can use SHM for inter-process
      // communication in a container environment
      struct stat statbuf;
      SYSCHECK(stat("/dev/shm", &statbuf), "stat");
      info->shmDev = statbuf.st_dev;
      info->busId = comm->busId;
      NCCLCHECK(ncclGpuGdrSupport(&info->gdrSupport));
      return ncclSuccess;
    }

    This fills ncclPeerInfo with the current card's rank, its PCIe busId, the device number of /dev/shm, and so on, and then calls ncclGpuGdrSupport to check whether GDR is supported. Before RDMA communication, a memory region must be registered so that the NIC knows the mapping from virtual to physical addresses. If every transfer first had to copy data from device memory to host memory, efficiency would suffer. IB provides a peer-memory interface that lets the IB NIC access other PCIe address spaces; NVIDIA implements its own driver on top of peer memory so that RDMA can register GPU memory directly. Communication then avoids the copies between host and device memory, and the IB NIC can DMA GPU memory directly. That is GDR (GPUDirect RDMA).
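    To make this concrete, the check can be illustrated outside NCCL with plain libibverbs and the CUDA runtime: if a peer-memory driver (nv_peermem / nvidia-peermem) is loaded, ibv_reg_mr can pin a cudaMalloc'ed buffer directly. The following standalone sketch is illustrative only, not NCCL code, and skips most error handling:

    // Minimal GDR probe sketch (not NCCL code): try to register GPU memory
    // with an IB NIC via libibverbs. Registration succeeds only if a
    // peer-memory driver (e.g. nvidia-peermem) is loaded.
    // Build: gcc gdr_probe.c -libverbs -lcudart
    #include <stdio.h>
    #include <infiniband/verbs.h>
    #include <cuda_runtime.h>

    int main() {
      int num;
      struct ibv_device** devs = ibv_get_device_list(&num);
      if (devs == NULL || num == 0) { printf("no IB devices\n"); return 1; }
      struct ibv_context* ctx = ibv_open_device(devs[0]);
      if (ctx == NULL) return 1;
      struct ibv_pd* pd = ibv_alloc_pd(ctx);
      if (pd == NULL) return 1;

      void* gpuPtr = NULL;
      if (cudaMalloc(&gpuPtr, 1 << 20) != cudaSuccess) return 1;

      // If this returns non-NULL, the NIC can DMA GPU memory directly (GDR).
      struct ibv_mr* mr = ibv_reg_mr(pd, gpuPtr, 1 << 20,
                                     IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
      printf("GPU Direct RDMA %s\n", mr ? "supported" : "not supported");

      if (mr) ibv_dereg_mr(mr);
      cudaFree(gpuPtr);
      ibv_dealloc_pd(pd);
      ibv_close_device(ctx);
      ibv_free_device_list(devs);
      return 0;
    }

    NCCL's own probe, ncclGpuGdrSupport, does the equivalent through its net plugin: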

    static ncclResult_t ncclGpuGdrSupport(int* gdrSupport) {
      int netDevs;
      NCCLCHECK(ncclNetDevices(&netDevs));
      *gdrSupport = 0;
      for (int dev=0; dev<netDevs; dev++) {
        // Find a net device which is GDR-capable
        ncclNetProperties_t props;
        NCCLCHECK(ncclNet->getProperties(dev, &props));
        if ((props.ptrSupport & NCCL_PTR_CUDA) == 0) continue;
        // Allocate memory on the GPU and try to register it on the NIC.
        void *lComm = NULL, *sComm = NULL, *rComm = NULL;
        ncclNetHandle_t handle;
        void* gpuPtr = NULL;
        void* mHandle = NULL;
        NCCLCHECK(ncclNetListen(dev, &handle, &lComm));
        NCCLCHECK(ncclNetConnect(dev, &handle, &sComm));
        NCCLCHECK(ncclNetAccept(lComm, &rComm));
        CUDACHECK(cudaMalloc(&gpuPtr, GPU_BUF_SIZE));
        ncclDebugNoWarn = NCCL_NET;
        if (ncclNetRegMr(sComm, gpuPtr, GPU_BUF_SIZE, NCCL_PTR_CUDA, &mHandle) == ncclSuccess) {
          NCCLCHECK(ncclNetDeregMr(sComm, mHandle));
          NCCLCHECK(ncclNetRegMr(rComm, gpuPtr, GPU_BUF_SIZE, NCCL_PTR_CUDA, &mHandle));
          NCCLCHECK(ncclNetDeregMr(rComm, mHandle));
          *gdrSupport = 1;
        }
        ncclDebugNoWarn = 0;
        CUDACHECK(cudaFree(gpuPtr));
        NCCLCHECK(ncclNetCloseRecv(rComm));
        NCCLCHECK(ncclNetCloseSend(sComm));
        NCCLCHECK(ncclNetCloseListen(lComm));
        break;
      }
      return ncclSuccess;
    }

    This iterates over every NIC and queries its properties. As shown in Part 1, ncclNet here is ncclNetIb.

    ncclResult_t ncclIbGdrSupport(int ibDev) {
      static int moduleLoaded = -1;
      if (moduleLoaded == -1) {
        moduleLoaded = (access("/sys/kernel/mm/memory_peers/nv_mem/version", F_OK) == -1) ? 0 : 1;
      }
      if (moduleLoaded == 0) return ncclSystemError;
      return ncclSuccess;
    }

    ncclResult_t ncclIbGetProperties(int dev, ncclNetProperties_t* props) {
      props->name = ncclIbDevs[dev].devName;
      props->pciPath = ncclIbDevs[dev].pciPath;
      props->guid = ncclIbDevs[dev].guid;
      props->ptrSupport = NCCL_PTR_HOST;
      if (ncclIbGdrSupport(dev) != ncclSuccess) {
        INFO(NCCL_NET,"NET/IB : GPU Direct RDMA Disabled for HCA %d '%s' (no module)", dev, ncclIbDevs[dev].devName);
      } else {
        props->ptrSupport |= NCCL_PTR_CUDA;
      }
      props->speed = ncclIbDevs[dev].speed;
      props->port = ncclIbDevs[dev].port + ncclIbDevs[dev].realPort;
      props->maxComms = ncclIbDevs[dev].maxQp;
      return ncclSuccess;
    }

    This mainly fetches the NIC name, PCIe path, guid and other information, and then checks whether /sys/kernel/mm/memory_peers/nv_mem/version exists to decide whether nv_peermem, NVIDIA's peer-memory driver, is installed. If it is, props->ptrSupport |= NCCL_PTR_CUDA is set, meaning GPU memory can be registered.

    It then tries to register GPU memory; if registration succeeds, gdrSupport is set to 1. This actually creates an RDMA connection, which will be covered separately later, so we skip the details for now.

    static ncclResult_t initTransportsRank(struct ncclComm* comm, ncclUniqueId* commId) {
      ...
      NCCLCHECK(bootstrapAllGather(comm->bootstrap, allGather1Data, sizeof(*allGather1Data)));
      NCCLCHECK(ncclCalloc(&comm->peerInfo, nranks+1)); // Extra rank to represent CollNet root
      for (int i = 0; i < nranks; i++) {
        memcpy(comm->peerInfo+i, &allGather1Data[i].peerInfo, sizeof(struct ncclPeerInfo));
        if ((i != rank) && (comm->peerInfo[i].hostHash == myInfo->hostHash) && (comm->peerInfo[i].busId == myInfo->busId)) {
          WARN("Duplicate GPU detected : rank %d and rank %d both on CUDA device %x", rank, i, myInfo->busId);
          return ncclInvalidUsage;
        }
      }
      // AllGather1 data is used again below
      // AllGather1 - end

      // Topo detection / System graph creation
      NCCLCHECK(ncclTopoGetSystem(comm, &comm->topo));
      ...
    }

    bootstrapAllGather then all-gathers allGather1Data, and the peerInfo received from the other ranks is copied into comm.

    Before looking at the actual topology-analysis flow, let's briefly review a few PCIe concepts. A simple PCIe system looks like the following.

    Each CPU has its own root complex (RC for short). The RC handles communication between the CPU and the rest of the system, such as memory and the PCIe hierarchy. When the CPU issues a physical address that falls into PCIe space, the RC converts it into a PCIe request.

    A switch extends PCIe ports: devices or other switches can be attached below it, and requests arriving from upstream are forwarded by it. A PCIe device can hang off either the RC or a switch. The inside of a switch looks like this:

    Internally there is a PCIe bus, from which several bridges fan out into multiple ports; the one at the top is called the upstream port and the others are downstream ports.

    As mentioned earlier, busId is a very common variable name in NCCL, used for both GPUs and IB NICs. Note that NCCL's busId is not the bus number; it is the ID used to locate a PCIe device, i.e. the BDF (bus + device + function). A bus carries multiple devices and a device can expose multiple functions, so a BDF uniquely identifies a device. After the machine boots and PCIe is configured, the relevant information is exposed to user space through sysfs, and sysfs is what NCCL uses for topology detection.
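    As a concrete illustration, a busId string such as the hypothetical 0000:17:00.0 decomposes into domain 0000, bus 0x17, device 00 and function 0, and the matching sysfs entry lives under /sys/bus/pci/devices/. A tiny sketch (not NCCL code):

    #include <stdio.h>

    int main() {
      // Hypothetical busId: domain 0000, bus 0x17, device 00, function 0
      const char* busId = "0000:17:00.0";
      unsigned int domain, bus, dev, fn;
      if (sscanf(busId, "%x:%x:%x.%x", &domain, &bus, &dev, &fn) == 4) {
        // The matching sysfs entry is /sys/bus/pci/devices/<busId>/
        printf("domain=%04x bus=%02x device=%02x function=%x\n", domain, bus, dev, fn);
      }
      return 0;
    }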

    Now on to ncclTopoGetSystem, the focus of this section. It builds the PCI tree for the current rank in two steps: first the whole PCI tree is represented as xml, then the xml is converted into ncclTopoNode. The xml structures are defined as follows; one ncclXmlNode represents one node of the PCI tree.

    struct ncclXmlNode {
      char name[MAX_STR_LEN];
      struct {
        char key[MAX_STR_LEN];
        char value[MAX_STR_LEN];
      } attrs[MAX_ATTR_COUNT+1]; // Need an extra one to consume extra params
      int nAttrs;
      int type;
      struct ncclXmlNode* parent;
      struct ncclXmlNode* subs[MAX_SUBS];
      int nSubs;
    };

    struct ncclXml {
      struct ncclXmlNode nodes[MAX_NODES];
      int maxIndex;
    };

    ncclXmlNode represents one node: it records its parent and all of its children, and it carries a name and attributes, which are set via xmlSetAttr.

    ncclXml pre-allocates all nodes, and maxIndex records how many have been handed out so far. Next, a quick look at a few xml-related APIs.

    static ncclResult_t xmlAddNode(struct ncclXml* xml, struct ncclXmlNode* parent, const char* subName, struct ncclXmlNode** sub);

    xmlAddNode allocates a node: it claims a new node sub inside xml, sets sub's name to subName, and sets its parent to parent.

    static ncclResult_t xmlFindTagKv(struct ncclXml* xml, const char* tagName, struct ncclXmlNode** node, const char* attrName, const char* attrValue)

    xmlFindTagKv walks the already-allocated nodes of xml looking for a node n whose name is tagName; it then checks whether n["attrName"] equals attrValue and, if so, sets node to n.

    static ncclResult_t xmlGetAttrIndex(struct ncclXmlNode* node, const char* attrName, int* index)

    xmlGetAttrIndex looks up which attribute slot of node holds attrName (index is -1 if the attribute is not present).
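    Based on the ncclXmlNode definition above, these attribute helpers boil down to linear scans over attrs. The following is a simplified sketch of xmlGetAttrIndex and xmlSetAttr, paraphrased rather than the exact NCCL implementation; it assumes the surrounding NCCL context (MAX_STR_LEN, NCCLCHECK, ncclResult_t, <string.h>):

    // Simplified sketch based on the ncclXmlNode definition above (not the exact NCCL code).
    static ncclResult_t xmlGetAttrIndex(struct ncclXmlNode* node, const char* attrName, int* index) {
      *index = -1;
      for (int a = 0; a < node->nAttrs; a++) {
        if (strncmp(node->attrs[a].key, attrName, MAX_STR_LEN) == 0) { *index = a; break; }
      }
      return ncclSuccess;
    }

    static ncclResult_t xmlSetAttr(struct ncclXmlNode* node, const char* attrName, const char* value) {
      int index;
      NCCLCHECK(xmlGetAttrIndex(node, attrName, &index));
      if (index == -1) {                     // new attribute: append the key
        index = node->nAttrs++;
        strncpy(node->attrs[index].key, attrName, MAX_STR_LEN);
      }
      strncpy(node->attrs[index].value, value, MAX_STR_LEN);  // set/overwrite the value
      return ncclSuccess;
    }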

    Now let's walk through the topology-analysis process.

    ncclResult_t ncclTopoGetSystem(struct ncclComm* comm, struct ncclTopoSystem** system) {
      struct ncclXml* xml;
      NCCLCHECK(ncclCalloc(&xml, 1));
      char* xmlTopoFile = getenv("NCCL_TOPO_FILE");
      if (xmlTopoFile) {
        INFO(NCCL_ENV, "NCCL_TOPO_FILE set by environment to %s", xmlTopoFile);
        NCCLCHECK(ncclTopoGetXmlFromFile(xmlTopoFile, xml));
      }
      if (xml->maxIndex == 0) {
        // Create top tag
        struct ncclXmlNode* top;
        NCCLCHECK(xmlAddNode(xml, NULL, "system", &top));
        NCCLCHECK(xmlSetAttrInt(top, "version", NCCL_TOPO_XML_VERSION));
      }
      // Auto-detect GPUs if needed
      for (int r=0; r<comm->nRanks; r++) {
        if (comm->peerInfo[r].hostHash == comm->peerInfo[comm->rank].hostHash) {
          char busId[NVML_DEVICE_PCI_BUS_ID_BUFFER_SIZE];
          NCCLCHECK(int64ToBusId(comm->peerInfo[r].busId, busId));
          struct ncclXmlNode* node;
          NCCLCHECK(ncclTopoFillGpu(xml, busId, &node));
          if (node == NULL) continue;
          NCCLCHECK(xmlSetAttrInt(node, "rank", r));
          NCCLCHECK(xmlInitAttrInt(node, "gdr", comm->peerInfo[r].gdrSupport));
        }
      }
      ...
    }

    First, xmlAddNode creates the root node "system" (double quotes are used below for xml tree nodes), and the root attribute "system"["version"] = NCCL_TOPO_XML_VERSION is set. Then every rank's hostHash is compared with the local one; if they match, that rank is on the same machine, and ncclTopoFillGpu is executed to add the GPU to the xml tree.

    ncclResult_t ncclTopoFillGpu(struct ncclXml* xml, const char* busId, struct ncclXmlNode** gpuNode) {
      struct ncclXmlNode* node;
      NCCLCHECK(ncclTopoGetPciNode(xml, busId, &node));
      NCCLCHECK(ncclTopoGetXmlFromSys(node, xml));
      ...
    }

    ncclResult_t ncclTopoGetPciNode(struct ncclXml* xml, const char* busId, struct ncclXmlNode** pciNode) {
      NCCLCHECK(xmlFindTagKv(xml, "pci", pciNode, "busid", busId));
      if (*pciNode == NULL) {
        NCCLCHECK(xmlAddNode(xml, NULL, "pci", pciNode));
      }
      NCCLCHECK(xmlSetAttr(*pciNode, "busid", busId));
      return ncclSuccess;
    }

    ncclTopoGetPciNode checks whether an xml node for the current card already exists in the xml. At this point it does not, so a new xml node named "pci" is created to represent the GPU, with "pci"["busid"] = busId.

    Then ncclTopoGetXmlFromSys is executed. Its main job is to obtain, from sysfs, the path from the GPU node up to the CPU, turn that path into the xml tree, and read the attributes found along the path into the xml.

    ncclResult_t ncclTopoGetXmlFromSys(struct ncclXmlNode* pciNode, struct ncclXml* xml) {
      // Fill info, then parent
      const char* busId;
      NCCLCHECK(xmlGetAttr(pciNode, "busid", &busId));
      char* path = NULL;
      int index;
      NCCLCHECK(xmlGetAttrIndex(pciNode, "class", &index));
      if (index == -1) {
        if (path == NULL) NCCLCHECK(getPciPath(busId, &path));
        NCCLCHECK(ncclTopoSetAttrFromSys(pciNode, path, "class", "class"));
      }
      NCCLCHECK(xmlGetAttrIndex(pciNode, "link_speed", &index));
      if (index == -1) {
        if (path == NULL) NCCLCHECK(getPciPath(busId, &path));
        char deviceSpeedStr[MAX_STR_LEN];
        float deviceSpeed;
        NCCLCHECK(ncclTopoGetStrFromSys(path, "max_link_speed", deviceSpeedStr));
        sscanf(deviceSpeedStr, "%f GT/s", &deviceSpeed);
        char portSpeedStr[MAX_STR_LEN];
        float portSpeed;
        NCCLCHECK(ncclTopoGetStrFromSys(path, "../max_link_speed", portSpeedStr));
        sscanf(portSpeedStr, "%f GT/s", &portSpeed);
        NCCLCHECK(xmlSetAttr(pciNode, "link_speed", portSpeed < deviceSpeed ? portSpeedStr : deviceSpeedStr));
      }
      NCCLCHECK(xmlGetAttrIndex(pciNode, "link_width", &index));
      if (index == -1) {
        if (path == NULL) NCCLCHECK(getPciPath(busId, &path));
        char strValue[MAX_STR_LEN];
        NCCLCHECK(ncclTopoGetStrFromSys(path, "max_link_width", strValue));
        int deviceWidth = strtol(strValue, NULL, 0);
        NCCLCHECK(ncclTopoGetStrFromSys(path, "../max_link_width", strValue));
        int portWidth = strtol(strValue, NULL, 0);
        NCCLCHECK(xmlSetAttrInt(pciNode, "link_width", std::min(deviceWidth,portWidth)));
      }
      ...
    }

    It first fills in pciNode's attributes. getPciPath returns the sysfs path corresponding to busId; this path is essentially the root-to-leaf path in the PCI tree.

    static ncclResult_t getPciPath(const char* busId, char** path) {
      char busPath[] = "/sys/class/pci_bus/0000:00/../../0000:00:00.0";
      memcpylower(busPath+sizeof("/sys/class/pci_bus/")-1, busId, BUSID_REDUCED_SIZE-1);
      memcpylower(busPath+sizeof("/sys/class/pci_bus/0000:00/../../")-1, busId, BUSID_SIZE-1);
      *path = realpath(busPath, NULL);
      if (*path == NULL) {
        WARN("Could not find real path of %s", busPath);
        return ncclSystemError;
      }
      return ncclSuccess;
    }

    For example, path might be /sys/devices/pci0000:10/0000:10:00.0/0000:11:00.0/0000:12:00.0/0000:13:00.0/0000:14:00.0/0000:15:00.0/0000:16:00.0/0000:17:00.0, where the GPU's busId is 0000:17:00.0. This path corresponds to the figure below (note that the switch corresponding to 15:00.0 is omitted from the figure).

    The attributes under path are then read, and class (the PCI device type), link_speed, link_width, etc. are set on the xml pciNode. ncclTopoGetStrFromSys simply reads a sysfs file under path into strValue.

    ncclResult_t ncclTopoGetStrFromSys(const char* path, const char* fileName, char* strValue) {
      char filePath[PATH_MAX];
      sprintf(filePath, "%s/%s", path, fileName);
      int offset = 0;
      FILE* file;
      if ((file = fopen(filePath, "r")) != NULL) {
        while (feof(file) == 0 && ferror(file) == 0 && offset < MAX_STR_LEN) {
          int len = fread(strValue+offset, 1, MAX_STR_LEN-offset, file);
          offset += len;
        }
        fclose(file);
      }
      if (offset == 0) {
        strValue[0] = '\0';
        INFO(NCCL_GRAPH, "Topology detection : could not read %s, ignoring", filePath);
      } else {
        strValue[offset-1] = '\0';
      }
      return ncclSuccess;
    }

     

    ncclResult_t ncclTopoGetXmlFromSys(struct ncclXmlNode* pciNode, struct ncclXml* xml) {
      // Fill info, then parent
      ...
      struct ncclXmlNode* parent = pciNode->parent;
      if (parent == NULL) {
        if (path == NULL) NCCLCHECK(getPciPath(busId, &path));
        // Save that for later in case next step is a CPU
        char numaIdStr[MAX_STR_LEN];
        NCCLCHECK(ncclTopoGetStrFromSys(path, "numa_node", numaIdStr));
        // Go up one level in the PCI tree. Rewind two "/" and follow the upper PCI
        // switch, or stop if we reach a CPU root complex.
        int slashCount = 0;
        int parentOffset;
        for (parentOffset = strlen(path)-1; parentOffset>0; parentOffset--) {
          if (path[parentOffset] == '/') {
            slashCount++;
            path[parentOffset] = '\0';
            int start = parentOffset - 1;
            while (start>0 && path[start] != '/') start--;
            // Check whether the parent path looks like "BBBB:BB:DD.F" or not.
            if (checkBDFFormat(path+start+1) == 0) {
              // This a CPU root complex. Create a CPU tag and stop there.
              struct ncclXmlNode* topNode;
              NCCLCHECK(xmlFindTag(xml, "system", &topNode));
              NCCLCHECK(xmlGetSubKv(topNode, "cpu", &parent, "numaid", numaIdStr));
              if (parent == NULL) {
                NCCLCHECK(xmlAddNode(xml, topNode, "cpu", &parent));
                NCCLCHECK(xmlSetAttr(parent, "numaid", numaIdStr));
              }
            } else if (slashCount == 2) {
              // Continue on the upper PCI switch
              for (int i = strlen(path)-1; i>0; i--) {
                if (path[i] == '/') {
                  NCCLCHECK(xmlFindTagKv(xml, "pci", &parent, "busid", path+i+1));
                  if (parent == NULL) {
                    NCCLCHECK(xmlAddNode(xml, NULL, "pci", &parent));
                    NCCLCHECK(xmlSetAttr(parent, "busid", path+i+1));
                  }
                  break;
                }
              }
            }
          }
          if (parent) break;
        }
        pciNode->parent = parent;
        parent->subs[parent->nSubs++] = pciNode;
      }
      if (strcmp(parent->name, "pci") == 0) {
        NCCLCHECK(ncclTopoGetXmlFromSys(parent, xml));
      } else if (strcmp(parent->name, "cpu") == 0) {
        NCCLCHECK(ncclTopoGetXmlFromCpu(parent, xml));
      }
      free(path);
      return ncclSuccess;
    }

    Then it walks upward from pciNode. Because a switch's upstream port and each downstream port are separate bridges, NCCL uses the busid of the upstream-port bridge to represent the switch, so it has to go up two levels before creating an xml node for the switch. Each PCI device found on the way up increments slashCount; when slashCount == 2, the upstream port of a switch has been reached, and a new xml "pci" node parent is created for that switch, with the current pciNode linked under parent. Since parent is still an xml "pci" node, ncclTopoGetXmlFromSys recurses until it reaches the RC; at that point a "cpu" child is created under "system" and the recursion stops. ncclTopoGetXmlFromCpu then sets the "cpu" node's attributes, such as arch (e.g. x86 or arm), affinity (which CPU cores belong to this cpu's NUMA node), numaid, and so on.
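    The "rewind two slashes" walk can be illustrated in isolation: drop the last path component to reach the downstream-port bridge, drop one more to reach the upstream-port bridge that names the switch, and stop as soon as a component no longer looks like a BDF, which means the CPU root complex has been reached. A simplified sketch under those assumptions (not the NCCL code itself, and using a shortened hypothetical path):

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    // Does s look like "BBBB:BB:DD.F"? (simplified version of what checkBDFFormat does)
    static int looksLikeBdf(const char* s) {
      const char* pattern = "xxxx:xx:xx.x";
      if (strlen(s) != strlen(pattern)) return 0;
      for (size_t i = 0; pattern[i]; i++) {
        if (pattern[i] == 'x' ? !isxdigit((unsigned char)s[i]) : s[i] != pattern[i]) return 0;
      }
      return 1;
    }

    int main() {
      // Hypothetical (shortened) sysfs path for a GPU at 0000:17:00.0
      char path[] = "/sys/devices/pci0000:10/0000:10:00.0/0000:11:00.0/0000:16:00.0/0000:17:00.0";
      int slashCount = 0;
      for (char* p = path + strlen(path) - 1; p > path; p--) {
        if (*p != '/') continue;
        *p = '\0';                                    // drop one path component
        slashCount++;
        const char* comp = strrchr(path, '/') + 1;    // component that is now last
        if (!looksLikeBdf(comp)) {                    // not a BDF: we hit the root complex
          printf("reached the CPU root complex at %s\n", path);
          break;
        }
        if (slashCount == 2) {                        // upstream-port bridge names the switch
          printf("switch (upstream-port bridge): %s\n", comp);
          slashCount = 0;                             // keep walking up from this switch
        }
      }
      return 0;
    }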

    That completes ncclTopoGetXmlFromSys; back to ncclTopoFillGpu.

    ncclResult_t ncclTopoFillGpu(struct ncclXml* xml, const char* busId, struct ncclXmlNode** gpuNode) {
      ...
      NCCLCHECK(wrapNvmlSymbols());
      NCCLCHECK(wrapNvmlInit());
      nvmlDevice_t nvmlDev;
      if (wrapNvmlDeviceGetHandleByPciBusId(busId, &nvmlDev) != ncclSuccess) nvmlDev = NULL;
      NCCLCHECK(ncclTopoGetXmlFromGpu(node, nvmlDev, xml, gpuNode));
      return ncclSuccess;
    }

    Then wrapNvmlSymbols loads the dynamic library libnvidia-ml.so.1, which is used to query GPU information.
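    wrapNvmlSymbols essentially dlopen's the library and resolves the nvml* entry points with dlsym. A minimal sketch of that pattern (the symbol names are real NVML entry points, but this tiny wrapper is illustrative, not NCCL's):

    // Illustrative dlopen/dlsym pattern. Build: gcc nvml_dl.c -ldl
    #include <dlfcn.h>
    #include <stdio.h>

    int main() {
      void* handle = dlopen("libnvidia-ml.so.1", RTLD_NOW);
      if (handle == NULL) { printf("NVML not available: %s\n", dlerror()); return 1; }
      // Resolve a couple of real NVML entry points (return type simplified to int).
      int (*nvmlInit)(void) = (int (*)(void))dlsym(handle, "nvmlInit_v2");
      int (*nvmlShutdown)(void) = (int (*)(void))dlsym(handle, "nvmlShutdown");
      if (nvmlInit && nvmlInit() == 0) {   // 0 == NVML_SUCCESS
        printf("NVML initialized via dlopen\n");
        if (nvmlShutdown) nvmlShutdown();
      }
      dlclose(handle);
      return 0;
    }

    With the NVML handle in hand, ncclTopoGetXmlFromGpu fills in the "gpu" node: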

    ncclResult_t ncclTopoGetXmlFromGpu(struct ncclXmlNode* pciNode, nvmlDevice_t nvmlDev, struct ncclXml* xml, struct ncclXmlNode** gpuNodeRet) {
      struct ncclXmlNode* gpuNode = NULL;
      NCCLCHECK(xmlGetSub(pciNode, "gpu", &gpuNode));
      if (gpuNode == NULL) NCCLCHECK(xmlAddNode(xml, pciNode, "gpu", &gpuNode));
      int index = -1;
      int dev = -1;
      NCCLCHECK(xmlGetAttrIndex(gpuNode, "dev", &index));
      if (index == -1) {
        if (nvmlDev == NULL) {
          WARN("No NVML, trying to use CUDA instead");
          const char* busId;
          NCCLCHECK(xmlGetAttr(pciNode, "busid", &busId));
          if (busId == NULL || cudaDeviceGetByPCIBusId(&dev, busId) != cudaSuccess) dev = -1;
        } else {
          NCCLCHECK(wrapNvmlDeviceGetIndex(nvmlDev, (unsigned int*)&dev));
        }
        NCCLCHECK(xmlSetAttrInt(gpuNode, "dev", dev));
      }
      NCCLCHECK(xmlGetAttrInt(gpuNode, "dev", &dev));
      if (dev == -1) { *gpuNodeRet = NULL; return ncclSuccess; }
      NCCLCHECK(xmlGetAttrIndex(gpuNode, "sm", &index));
      if (index == -1) {
        int cudaMajor, cudaMinor;
        if (nvmlDev == NULL) {
          cudaDeviceProp devProp;
          CUDACHECK(cudaGetDeviceProperties(&devProp, dev));
          cudaMajor = devProp.major; cudaMinor = devProp.minor;
        } else {
          NCCLCHECK(wrapNvmlDeviceGetCudaComputeCapability(nvmlDev, &cudaMajor, &cudaMinor));
        }
        NCCLCHECK(xmlSetAttrInt(gpuNode, "sm", cudaMajor*10+cudaMinor));
      }
      int sm;
      NCCLCHECK(xmlGetAttrInt(gpuNode, "sm", &sm));
      struct ncclXmlNode* nvlNode = NULL;
      NCCLCHECK(xmlGetSub(pciNode, "nvlink", &nvlNode));
      if (nvlNode == NULL) {
        // NVML NVLink detection
        int maxNvLinks = (sm < 60) ? 0 : (sm < 70) ? 4 : (sm < 80) ? 6 : 12;
        if (maxNvLinks > 0 && nvmlDev == NULL) {
          WARN("No NVML device handle. Skipping nvlink detection.\n");
          maxNvLinks = 0;
        }
        for (int l=0; l<maxNvLinks; ++l) {
          // Check whether we can use this NVLink for P2P
          unsigned canP2P;
          if ((wrapNvmlDeviceGetNvLinkCapability(nvmlDev, l, NVML_NVLINK_CAP_P2P_SUPPORTED, &canP2P) != ncclSuccess) || !canP2P) continue;
          // Make sure the Nvlink is up. The previous call should have trained the link.
          nvmlEnableState_t isActive;
          if ((wrapNvmlDeviceGetNvLinkState(nvmlDev, l, &isActive) != ncclSuccess) || (isActive != NVML_FEATURE_ENABLED)) continue;
          // Try to figure out what's on the other side of the NVLink
          nvmlPciInfo_t remoteProc;
          if (wrapNvmlDeviceGetNvLinkRemotePciInfo(nvmlDev, l, &remoteProc) != ncclSuccess) continue;
          // Make a lower case copy of the bus ID for calling ncclDeviceType
          // PCI system path is in lower case
          char* p = remoteProc.busId;
          char lowerId[NVML_DEVICE_PCI_BUS_ID_BUFFER_SIZE];
          for (int c=0; c<NVML_DEVICE_PCI_BUS_ID_BUFFER_SIZE; c++) {
            lowerId[c] = tolower(p[c]);
            if (p[c] == 0) break;
          }
          NCCLCHECK(xmlGetSubKv(gpuNode, "nvlink", &nvlNode, "target", lowerId));
          if (nvlNode == NULL) {
            NCCLCHECK(xmlAddNode(xml, gpuNode, "nvlink", &nvlNode));
            NCCLCHECK(xmlSetAttr(nvlNode, "target", lowerId));
            NCCLCHECK(xmlSetAttrInt(nvlNode, "count", 1));
          } else {
            int count;
            NCCLCHECK(xmlGetAttrInt(nvlNode, "count", &count));
            NCCLCHECK(xmlSetAttrInt(nvlNode, "count", count+1));
          }
        }
      }
      // Fill target classes
      for (int s=0; s<gpuNode->nSubs; s++) {
        struct ncclXmlNode* sub = gpuNode->subs[s];
        if (strcmp(sub->name, "nvlink") != 0) continue;
        int index;
        NCCLCHECK(xmlGetAttrIndex(sub, "tclass", &index));
        if (index == -1) {
          const char* busId;
          NCCLCHECK(xmlGetAttr(sub, "target", &busId));
          if (strcmp(busId, "fffffff:ffff:ff") == 0) {
            // Remote NVLink device is not visible inside this VM. Assume NVSwitch.
            NCCLCHECK(xmlSetAttr(sub, "tclass", "0x068000"));
          } else {
            char* path;
            NCCLCHECK(getPciPath(busId, &path));
            NCCLCHECK(ncclTopoSetAttrFromSys(sub, path, "class", "tclass"));
          }
        }
      }
      *gpuNodeRet = gpuNode;
      return ncclSuccess;
    }

    First a "gpu" node is created under the GPU's xml "pci" node and its attributes are set, for example dev and the compute capability sm. Then NVLink information is queried: every possible NVLink is iterated and examined via nvmlDeviceGetNvLinkCapability. If a link is enabled, a new "nvlink" node is created under the "gpu" node, with "target" set to the PCIe busId of the remote end of that NVLink. "nvlink" nodes with the same "target" are merged into one, with "count" recording how many NVLinks connect the two endpoints; finally "tclass" records what kind of PCI device the "target" is.
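    For comparison, the same NVLink information can be queried with NVML directly, outside NCCL. A hedged sketch, assuming nvml.h and libnvidia-ml are installed, that mirrors the loop above:

    // Sketch: enumerate active NVLinks of GPU 0 and print the remote PCI busId.
    // Build: gcc nvlink_probe.c -lnvidia-ml
    #include <stdio.h>
    #include <nvml.h>

    int main() {
      if (nvmlInit_v2() != NVML_SUCCESS) return 1;
      nvmlDevice_t dev;
      if (nvmlDeviceGetHandleByIndex_v2(0, &dev) != NVML_SUCCESS) return 1;
      for (unsigned int l = 0; l < NVML_NVLINK_MAX_LINKS; l++) {
        nvmlEnableState_t isActive;
        if (nvmlDeviceGetNvLinkState(dev, l, &isActive) != NVML_SUCCESS) continue;
        if (isActive != NVML_FEATURE_ENABLED) continue;
        nvmlPciInfo_t remote;
        if (nvmlDeviceGetNvLinkRemotePciInfo_v2(dev, l, &remote) != NVML_SUCCESS) continue;
        // On an NVSwitch system every active link typically points at an NVSwitch busId.
        printf("link %u -> %s\n", l, remote.busId);
      }
      nvmlShutdown();
      return 0;
    }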

    That completes ncclTopoFillGpu. At this point the xml looks like the figure below (only a single card is shown); note that "gpu" and its parent node both refer to the same GPU.

    Back in ncclTopoGetSystem, the "gpu" node's rank and gdr attributes are then set.

    Then, for every NIC, a process similar to the GPU case above builds up the xml tree through ncclTopoGetXmlFromSys, as shown below (again only one NIC is shown); here "net", "nic" and "nic"'s parent node all refer to the same NIC.
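    For reference, the PCI path of an IB NIC can be resolved from sysfs in the same spirit as getPciPath; a small sketch (the device name mlx5_0 is just an example):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main() {
      // /sys/class/infiniband/<ibdev>/device is a symlink to the NIC's PCI device directory.
      const char* ibDev = "mlx5_0";          // example device name
      char link[PATH_MAX];
      snprintf(link, sizeof(link), "/sys/class/infiniband/%s/device", ibDev);
      char* pciPath = realpath(link, NULL);  // e.g. /sys/devices/pci0000:10/.../0000:1c:00.0
      printf("%s -> %s\n", link, pciPath ? pciPath : "(not found)");
      free(pciPath);
      return 0;
    }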

    Finally, here is what the resulting xml looks like:

    <system version="1">
      <cpu numaid="0" affinity="00000000,0000000f,ffff0000,00000000,000fffff" arch="x86_64" vendor="GenuineIntel" familyid="6" modelid="85">
        <pci busid="0000:11:00.0" class="0x060400" link_speed="8 GT/s" link_width="16">
          <pci busid="0000:13:00.0" class="0x060400" link_speed="8 GT/s" link_width="16">
            <pci busid="0000:15:00.0" class="0x060400" link_speed="8 GT/s" link_width="16">
              <pci busid="0000:17:00.0" class="0x030200" link_speed="16 GT/s" link_width="16">
                <gpu dev="0" sm="80" rank="0" gdr="1">
                  <nvlink target="0000:e7:00.0" count="2" tclass="0x068000"/>
                  <nvlink target="0000:e4:00.0" count="2" tclass="0x068000"/>
                  <nvlink target="0000:e6:00.0" count="2" tclass="0x068000"/>
                  <nvlink target="0000:e9:00.0" count="2" tclass="0x068000"/>
                  <nvlink target="0000:e5:00.0" count="2" tclass="0x068000"/>
                  <nvlink target="0000:e8:00.0" count="2" tclass="0x068000"/>
                </gpu>
              </pci>
            </pci>
          </pci>
          <pci busid="0000:1c:00.0" class="0x020000" link_speed="8 GT/s" link_width="16">
            <nic>
              <net name="mlx5_0" dev="0" speed="100000" port="1" guid="0x82d0c0003f6ceb8" maxconn="262144" gdr="1"/>
            </nic>
          </pci>
        </pci>
      </cpu>
    </system>

    To summarize, this section walked through NCCL's topology-analysis process, which uses sysfs to build the PCI tree of the GPUs and NICs into an xml tree.

  • Original post: https://blog.csdn.net/KIDGIN7439/article/details/126990961