They rarely notice the inherent gap between the training objectives of the pretext and downstream tasks. Bridging this significant gap often requires costly fine-tuning to adapt the pre-trained model to the downstream problem, which prevents efficient elicitation of the pre-trained knowledge and thus leads to poor results.
To bridge the task gap, we propose a novel transfer learning paradigm to generalize GNNs, namely graph pre-training and prompt tuning (GPPT).


·The downstream task is recast as link prediction between two "nodes" from the pre-training task: each class is added to the graph as a virtual node, and the subgraph formed by a target node and its neighbors serves as the other endpoint. The link probability between the two is computed, and the class whose virtual node has the highest link probability is predicted as the node's class;
·The key question is how to obtain the embeddings of the classes and the nodes in the downstream task.
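The link-prediction-style classification described above can be sketched as follows (a minimal illustration with made-up tensors; `predict_class` and the dot-product scoring are my assumptions, not necessarily the paper's exact scoring function):

```python
import torch

# Hypothetical sketch: GPPT-style class prediction as link prediction.
# `node_emb` is the embedding of the target node's subgraph;
# `class_tokens` holds one embedding per class (virtual node).
def predict_class(node_emb: torch.Tensor, class_tokens: torch.Tensor) -> int:
    # Approximate the link probability by a dot-product score between
    # the node embedding and each class (virtual-node) embedding.
    scores = class_tokens @ node_emb   # shape: (num_classes,)
    return int(scores.argmax())        # class with the highest link score

# Toy usage with random embeddings (dimensions are illustrative only).
torch.manual_seed(0)
node_emb = torch.randn(16)
class_tokens = torch.randn(5, 16)
pred = predict_class(node_emb, class_tokens)
print(pred)  # an index in [0, 5)
```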


The authors first cluster the nodes, and then compute a separate embedding for each class within each cluster. The rationale is that the same class should have different embedding representations in different clusters, rather than sharing a single embedding across the whole graph.
Here the task token embedding is a trainable vector.
The initialization of the task token embedding and the loss function are as follows:
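The cluster-conditioned task tokens could be set up as in this sketch (names and the mean-pooling initialization are my assumptions; a plain NumPy k-means stands in for whatever clustering the paper uses):

```python
import numpy as np

# Minimal k-means, used here only to partition nodes into clusters.
def kmeans(x, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        assign = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(axis=0)
    return assign

# One task token per (cluster, class) pair, so the same class gets a
# different embedding in different clusters.
def init_task_tokens(node_emb, labels, num_clusters, num_classes):
    clusters = kmeans(node_emb, num_clusters)
    tokens = np.zeros((num_clusters, num_classes, node_emb.shape[1]))
    for k in range(num_clusters):
        for c in range(num_classes):
            mask = (clusters == k) & (labels == c)
            if mask.any():
                # Initialize from the mean embedding of the class members
                # inside this cluster (an assumed initialization scheme).
                tokens[k, c] = node_emb[mask].mean(axis=0)
    return tokens

rng = np.random.default_rng(1)
emb = rng.standard_normal((100, 8))   # pre-trained node embeddings (toy)
lab = rng.integers(0, 3, size=100)    # node class labels (toy)
tokens = init_task_tokens(emb, lab, num_clusters=4, num_classes=3)
print(tokens.shape)  # (4, 3, 8)
```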
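The paper's exact formulas are not reproduced here; as a rough illustration of how trainable task tokens can be tuned, a cross-entropy loss over the node-to-class link scores would look like this (function name and scoring are my assumptions):

```python
import torch
import torch.nn.functional as F

# Rough illustration only, not the paper's exact loss: prompt tuning
# trained with cross-entropy over node-to-class link scores.
def prompt_tuning_loss(node_emb, class_tokens, labels):
    # Score of linking each node to each class virtual node.
    logits = node_emb @ class_tokens.t()   # (num_nodes, num_classes)
    return F.cross_entropy(logits, labels)

torch.manual_seed(0)
node_emb = torch.randn(10, 16)                         # frozen node embeddings
class_tokens = torch.randn(5, 16, requires_grad=True)  # trainable task tokens
labels = torch.randint(0, 5, (10,))
loss = prompt_tuning_loss(node_emb, class_tokens, labels)
loss.backward()  # gradients flow only to the task tokens
```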


That is, the target node's information is aggregated with that of its neighbors.
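This aggregation can be sketched as follows (mean pooling is an assumption; the paper's aggregator may differ, and `structure_token` is my name for it):

```python
import torch

# Hypothetical sketch of a structure token: the target node's embedding
# aggregated with its neighbors' embeddings via mean pooling.
def structure_token(h: torch.Tensor, adj: torch.Tensor, v: int) -> torch.Tensor:
    neigh = adj[v].nonzero(as_tuple=True)[0]       # indices of v's neighbors
    group = torch.cat([h[v:v + 1], h[neigh]], 0)   # target node + neighbors
    return group.mean(dim=0)                       # aggregated subgraph embedding

# Toy 3-node path graph 0-1-2 with one-hot node embeddings.
h = torch.eye(3)
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
tok = structure_token(h, adj, 1)
print(tok)  # mean over nodes {1, 0, 2} -> tensor([1/3, 1/3, 1/3])
```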