Contents
16-NIPS: Improved Deep Metric Learning with Multi-class N-pair Loss Objective
multi-class N-pair loss (N-pair-mc)
one-vs-one N-pair loss (N-pair-ovo)
L2 norm regularization of embedding vectors
Simultaneously pushes the anchor away from negative samples of multiple classes, rather than one negative at a time.
When N = 2, it reduces to a form similar to the triplet loss (Equations (4) and (5) in the paper).
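Written out, with anchor embedding $f$, positive $f^{+}$, and the $N-1$ negatives $\{f_i\}$, the multi-class N-pair loss is:

```latex
\mathcal{L}_{\text{N-pair-mc}}\big(f, f^{+}, \{f_i\}_{i=1}^{N-1}\big)
  = \log\Big(1 + \sum_{i=1}^{N-1} \exp\big(f^{\top} f_i - f^{\top} f^{+}\big)\Big)
```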
SoftPlus activation function:
A smooth approximation of ReLU.
Its output is strictly positive (never exactly 0), which avoids the Dead ReLU problem; but it has no negative output range, so it cannot accelerate learning the way activations with a negative range do.
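A minimal numeric sketch of SoftPlus vs. ReLU (function names are mine):

```python
import math

def relu(x: float) -> float:
    # Hard threshold: output and gradient are 0 for all x < 0
    # (the "Dead ReLU" region).
    return max(0.0, x)

def softplus(x: float) -> float:
    # Smooth approximation of ReLU: log(1 + e^x).
    # Output is strictly positive, and the gradient sigmoid(x)
    # never hits exactly 0 -- but there is no negative range.
    return math.log1p(math.exp(x))
```

For large x, softplus(x) ≈ x; for very negative x it approaches 0 but never reaches it.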
When N = L (the number of classes), it resembles the softmax loss.
To remove the influence of the embedding norm → normalize the embeddings.
→ But strictly enforcing |f^T f^+| < 1 makes optimization difficult → instead, regularize the L2 norm of the embedding vectors to keep it small.
```python
loss = 0.0
for anchor, positive, negative_set in zip(anchors, positives, negatives):
    a_embs = batch[anchor:anchor + 1]        # anchor embedding, shape (1, D)
    p_embs = batch[positive:positive + 1]    # positive embedding, shape (1, D)
    n_embs = batch[negative_set]             # negative embeddings, shape (K, D)
    # f^T (f_i^- - f^+) for each of the K negatives, shape (1, K)
    inner_sum = a_embs @ (n_embs - p_embs).t()
    # log(1 + sum_i exp(f^T f_i^- - f^T f^+)), averaged over anchors
    loss = loss + torch.log(torch.exp(inner_sum).sum(dim=1) + 1).mean() / len(anchors)
# L2-norm regularization of the embedding vectors
loss = loss + self.l2_weight * batch.norm(p=2, dim=1).mean() / len(anchors)
```
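A self-contained variant of the loop above, vectorized over the whole batch; it assumes the standard N-pair batch layout where each anchor's negatives are the positives of the other N−1 pairs (the function name and `l2_weight` default are mine):

```python
import torch

def n_pair_mc_loss(anchors: torch.Tensor, positives: torch.Tensor,
                   l2_weight: float = 0.02) -> torch.Tensor:
    """anchors, positives: (N, D); pair i is (anchors[i], positives[i]).

    For anchor i, the positives of the other N-1 pairs act as negatives.
    """
    n = anchors.size(0)
    logits = anchors @ positives.t()   # (N, N): entry (i, j) = f_i^T f_j^+
    pos = logits.diag().unsqueeze(1)   # (N, 1): f_i^T f_i^+
    # exp(f_i^T f_j^+ - f_i^T f_i^+) for j != i; zero out the diagonal (j == i)
    diag = torch.eye(n, dtype=torch.bool, device=anchors.device)
    exp_diff = torch.exp(logits - pos).masked_fill(diag, 0.0)
    loss = torch.log1p(exp_diff.sum(dim=1)).mean()
    # L2-norm regularization of the embeddings, as in the snippet above
    reg = l2_weight * torch.cat([anchors, positives]).norm(p=2, dim=1).mean()
    return loss + reg
```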
The experiments adopt the smooth upper bound of the triplet loss in Equation (4), rather than the large-margin formulation, to be consistent with the N-pair-mc loss.
The multi-class N-pair loss performs better: the one-vs-one N-pair loss is decoupled, with each negative sample's loss computed independently of the others.
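The contrast is visible in the two forms, written schematically (normalization constants omitted): ovo sums independent per-negative losses, while mc couples all negatives inside one log-sum-exp:

```latex
\mathcal{L}_{\text{ovo}} = \sum_{i=1}^{N-1} \log\big(1 + \exp(f^{\top} f_i - f^{\top} f^{+})\big),
\qquad
\mathcal{L}_{\text{mc}} = \log\Big(1 + \sum_{i=1}^{N-1} \exp\big(f^{\top} f_i - f^{\top} f^{+}\big)\Big)
```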
With a fixed batch size, sampling one pair from each class lets the batch draw from more classes; the more negative classes involved in training, the better.
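A hypothetical sketch of this batch construction: sample N distinct classes and one (anchor, positive) pair per class, so each anchor's negatives come from the other N−1 classes (`class_to_items` and the function name are assumptions):

```python
import random

def sample_n_pair_batch(class_to_items: dict, n: int):
    # Pick n distinct classes and one (anchor, positive) pair per class;
    # within the batch, each anchor's negatives are the positives
    # drawn from the other n - 1 classes.
    classes = random.sample(list(class_to_items), n)
    anchors, positives = [], []
    for c in classes:
        a, p = random.sample(class_to_items[c], 2)
        anchors.append(a)
        positives.append(p)
    return anchors, positives
```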