PyTorch Notes (August 6) — GAN and WGAN



    Preface

    These are the PyTorch notes for August 6, split into two chapters:

    • GAN;
    • WGAN.

    I. GAN

    1. GAN Principles

    • Goal: learn the real data distribution p(x).
      How to train? The minimax objective is
      $$\min_G \max_D \; L(D, G) = \mathbb{E}_{x \sim p_r(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$
      $$V(D, G) = \mathbb{E}_{x \sim p_r(x)}[\log D(x)] + \mathbb{E}_{x \sim p_g(x)}[\log(1 - D(x))]$$
      where the prior p(z), pushed through G, induces the generated distribution p_g(x).

      • With G fixed, the optimal discriminator is (one-line derivation below):
        $$D^*_G(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)}$$
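        This follows by maximizing V(D, G) pointwise in x: writing a = p_data(x) and b = p_g(x), the integrand is a log D + b log(1 − D), and
        $$\frac{d}{dD}\big[a \log D + b \log(1 - D)\big] = \frac{a}{D} - \frac{b}{1 - D} = 0 \;\Rightarrow\; D = \frac{a}{a + b}.$$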
    • KL divergence:
      $$D_{KL}(p \,\|\, q) = \int_x p(x) \log \frac{p(x)}{q(x)} \, dx$$


    • JS divergence:
      $$D_{JS}(p \,\|\, q) = \frac{1}{2} D_{KL}\Big(p \,\Big\|\, \frac{p+q}{2}\Big) + \frac{1}{2} D_{KL}\Big(q \,\Big\|\, \frac{p+q}{2}\Big)$$

      • Once the optimal D is found, how G is updated (intermediate step below):
        $$D_{JS}(p_r \,\|\, p_g) = \frac{1}{2}\big(\log 4 + L(G, D^*)\big), \qquad L(G, D^*) = 2\, D_{JS}(p_r \,\|\, p_g) - 2 \log 2$$
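        The intermediate step: substituting D*_G into the value function gives
        $$L(G, D^*) = \mathbb{E}_{x \sim p_r}\Big[\log \frac{p_r(x)}{p_r(x) + p_g(x)}\Big] + \mathbb{E}_{x \sim p_g}\Big[\log \frac{p_g(x)}{p_r(x) + p_g(x)}\Big] = -\log 4 + 2\, D_{JS}(p_r \,\|\, p_g).$$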
    • As long as θ ≠ 0 (i.e., the two distributions do not overlap), KL ⇒ ∞ and JS ⇒ log 2.

    • Shortcomings of JS divergence (see the numeric check below):

      1. When the two distributions do not overlap, JS(P_G, P_data) = log 2, a constant, so it provides no gradient for training G;
      2. When θ = 0 (the distributions coincide), JS(P_G, P_data) = 0.
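
    To make shortcoming 1 concrete, here is a minimal numeric check (a sketch with hypothetical discrete distributions, not from the original notes): however far apart two disjoint distributions are, their JS divergence is pinned at log 2, so it carries no signal about the distance between them.

    import numpy as np

    def kl(p, q):
        # KL(p || q) on a shared discrete support; terms with p(x) = 0 contribute 0
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))

    def js(p, q):
        m = 0.5 * (p + q)
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    support = np.arange(10)
    p = np.where(support < 3, 1 / 3, 0.0)                # mass on {0, 1, 2}
    for shift in (3, 5, 7):                              # move q further and further away
        q = np.where((support >= shift) & (support < shift + 3), 1 / 3, 0.0)
        print(shift, js(p, q), np.log(2))                # JS stays at log 2 ≈ 0.693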

    The training code is as follows. Note that this listing already includes the gradient penalty discussed in Section 2, i.e. it is the WGAN-GP variant of the training loop:

    import  torch
    from    torch import nn, optim, autograd
    import  numpy as np
    import  visdom
    from    torch.nn import functional as F
    from    matplotlib import pyplot as plt
    import  random
    
    h_dim = 400
    batchsz = 512
    viz = visdom.Visdom()
    
    class Generator(nn.Module):
    
        def __init__(self):
            super(Generator, self).__init__()
    
            self.net = nn.Sequential(
                nn.Linear(2, h_dim),
                nn.ReLU(True),
                nn.Linear(h_dim, h_dim),
                nn.ReLU(True),
                nn.Linear(h_dim, h_dim),
                nn.ReLU(True),
                nn.Linear(h_dim, 2),
            )
    
        def forward(self, z):
            output = self.net(z)
            return output
    
    
    class Discriminator(nn.Module):
    
        def __init__(self):
            super(Discriminator, self).__init__()
    
            self.net = nn.Sequential(
                nn.Linear(2, h_dim),
                nn.ReLU(True),
                nn.Linear(h_dim, h_dim),
                nn.ReLU(True),
                nn.Linear(h_dim, h_dim),
                nn.ReLU(True),
                nn.Linear(h_dim, 1),
                nn.Sigmoid()  # kept from the original notes; a WGAN critic would normally omit this
            )
    
        def forward(self, x):
            output = self.net(x)
            return output.view(-1)
    
    def data_generator():
        """Infinite generator over a 2-D mixture of 8 Gaussians (a classic toy dataset)."""
    
        scale = 2.
        centers = [
            (1, 0),
            (-1, 0),
            (0, 1),
            (0, -1),
            (1. / np.sqrt(2), 1. / np.sqrt(2)),
            (1. / np.sqrt(2), -1. / np.sqrt(2)),
            (-1. / np.sqrt(2), 1. / np.sqrt(2)),
            (-1. / np.sqrt(2), -1. / np.sqrt(2))
        ]
        centers = [(scale * x, scale * y) for x, y in centers]
        while True:
            dataset = []
            for i in range(batchsz):
                point = np.random.randn(2) * .02
                center = random.choice(centers)
                point[0] += center[0]
                point[1] += center[1]
                dataset.append(point)
            dataset = np.array(dataset, dtype='float32')
            dataset /= 1.414  # scale down by ~sqrt(2)
            yield dataset
    
        # for i in range(100000//25):
        #     for x in range(-2, 3):
        #         for y in range(-2, 3):
        #             point = np.random.randn(2).astype(np.float32) * 0.05
        #             point[0] += 2 * x
        #             point[1] += 2 * y
        #             dataset.append(point)
        #
        # dataset = np.array(dataset)
        # print('dataset:', dataset.shape)
        # viz.scatter(dataset, win='dataset', opts=dict(title='dataset', webgl=True))
        #
        # while True:
        #     np.random.shuffle(dataset)
        #
        #     for i in range(len(dataset)//batchsz):
        #         yield dataset[i*batchsz : (i+1)*batchsz]
    
    
    def generate_image(D, G, xr, epoch):
        """
        Generates and saves a plot of the true distribution, the generator, and the
        critic.
        """
        N_POINTS = 128
        RANGE = 3
        plt.clf()
    
        points = np.zeros((N_POINTS, N_POINTS, 2), dtype='float32')
        points[:, :, 0] = np.linspace(-RANGE, RANGE, N_POINTS)[:, None]
        points[:, :, 1] = np.linspace(-RANGE, RANGE, N_POINTS)[None, :]
        points = points.reshape((-1, 2))
        # (16384, 2)
        # print('p:', points.shape)
    
        # draw contour
        with torch.no_grad():
            points = torch.Tensor(points).cuda() # [16384, 2]
            disc_map = D(points).cpu().numpy() # [16384]
        x = y = np.linspace(-RANGE, RANGE, N_POINTS)
        cs = plt.contour(x, y, disc_map.reshape((len(x), len(y))).transpose())
        plt.clabel(cs, inline=1, fontsize=10)
        # plt.colorbar()
    
    
        # draw samples
        with torch.no_grad():
            z = torch.randn(batchsz, 2).cuda() # [b, 2]
            samples = G(z).cpu().numpy() # [b, 2]
        plt.scatter(xr[:, 0], xr[:, 1], c='orange', marker='.')
        plt.scatter(samples[:, 0], samples[:, 1], c='green', marker='+')
    
        viz.matplot(plt, win='contour', opts=dict(title='p(x):%d'%epoch))
    
    
    def weights_init(m):
        if isinstance(m, nn.Linear):
            # m.weight.data.normal_(0.0, 0.02)
            nn.init.kaiming_normal_(m.weight)
            m.bias.data.fill_(0)
    
    def gradient_penalty(D, xr, xf):
        """
        WGAN-GP penalty: (||grad D(x_hat)||_2 - 1)^2 averaged over points x_hat
        interpolated between real and fake samples.

        :param D: discriminator (critic)
        :param xr: real samples, [b, 2]
        :param xf: generated samples, [b, 2]
        :return: scalar penalty term (already scaled by LAMBDA)
        """
        LAMBDA = 0.3
    
        # the penalty constrains only the discriminator, so cut gradients to G
        xf = xf.detach()
        xr = xr.detach()
    
        # [b, 1] => [b, 2]
        alpha = torch.rand(batchsz, 1).cuda()
        alpha = alpha.expand_as(xr)
    
        interpolates = alpha * xr + ((1 - alpha) * xf)
        interpolates.requires_grad_()
    
        disc_interpolates = D(interpolates)
    
        gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                                  grad_outputs=torch.ones_like(disc_interpolates),
                                  create_graph=True, retain_graph=True, only_inputs=True)[0]
    
        gp = ((gradients.norm(2, dim=1) - 1) ** 2).mean() * LAMBDA
    
        return gp
    
    def main():
    
        torch.manual_seed(23)
        np.random.seed(23)
    
        G = Generator().cuda()
        D = Discriminator().cuda()
        G.apply(weights_init)
        D.apply(weights_init)
    
        optim_G = optim.Adam(G.parameters(), lr=1e-3, betas=(0.5, 0.9))
        optim_D = optim.Adam(D.parameters(), lr=1e-3, betas=(0.5, 0.9))
    
    
        data_iter = data_generator()
        print('batch:', next(data_iter).shape)
    
        viz.line([[0,0]], [0], win='loss', opts=dict(title='loss',
                                                     legend=['D', 'G']))
    
        for epoch in range(50000):
    
            # 1. train discriminator for k steps
            for _ in range(5):
                x = next(data_iter)
                xr = torch.from_numpy(x).cuda()
    
                # [b]
                predr = (D(xr))
                # maximize E[D(x_r)]  <=>  minimize -E[D(x_r)]
                lossr = - (predr.mean())
    
                # [b, 2]
                z = torch.randn(batchsz, 2).cuda()
                # stop gradient on G
                # [b, 2]
                xf = G(z).detach()
                # [b]
                predf = (D(xf))
                # minimize E[D(x_f)]
                lossf = (predf.mean())
    
                # gradient penalty
                gp = gradient_penalty(D, xr, xf)
    
                loss_D = lossr + lossf + gp
                optim_D.zero_grad()
                loss_D.backward()
                # for p in D.parameters():
                #     print(p.grad.norm())
                optim_D.step()
    
    
            # 2. train Generator
            z = torch.randn(batchsz, 2).cuda()
            xf = G(z)
            predf = (D(xf))
            # maximize E[D(G(z))]  <=>  minimize -E[D(G(z))]
            loss_G = - (predf.mean())
            optim_G.zero_grad()
            loss_G.backward()
            optim_G.step()
    
    
            if epoch % 100 == 0:
                viz.line([[loss_D.item(), loss_G.item()]], [epoch], win='loss', update='append')
    
                generate_image(D, G, xr, epoch)
    
                print(loss_D.item(), loss_G.item())
    
    if __name__ == '__main__':
        main()
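
    For comparison, a plain GAN trained with the original cross-entropy objective would drop the gradient penalty and use BCE-style losses. A minimal sketch of the two training steps (a hypothetical replacement for the loop bodies above, reusing D, G, optim_D, optim_G, batchsz and data_iter from the listing; the Sigmoid output of D fits this setting):

    # discriminator step: maximize log D(x) + log(1 - D(G(z)))
    x = next(data_iter)
    xr = torch.from_numpy(x).cuda()
    z = torch.randn(batchsz, 2).cuda()
    predr = D(xr)
    predf = D(G(z).detach())
    loss_D = -(torch.log(predr + 1e-8).mean() + torch.log(1 - predf + 1e-8).mean())
    optim_D.zero_grad()
    loss_D.backward()
    optim_D.step()

    # generator step: non-saturating form, maximize log D(G(z))
    predf = D(G(torch.randn(batchsz, 2).cuda()))
    loss_G = -torch.log(predf + 1e-8).mean()
    optim_G.zero_grad()
    loss_G.backward()
    optim_G.step()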
    

    II. WGAN

    2. Wasserstein Distance

    $$W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\gamma \in \Pi(\mathbb{P}_r, \mathbb{P}_g)} \mathbb{E}_{(x, y) \sim \gamma}\,[\,\|x - y\|\,]$$

    where Π(P_r, P_g) is the set of all joint distributions whose marginals are P_r and P_g.
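
    As a quick intuition check (a hypothetical example, not from the original notes): in one dimension, the W1 distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values, so, unlike JS divergence, it keeps growing as the distributions move apart and always provides a usable signal.

    import numpy as np

    def w1(x, y):
        # empirical Wasserstein-1 for equal-size 1-D samples: match sorted order statistics
        return np.abs(np.sort(x) - np.sort(y)).mean()

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 10000)
    for shift in (0.0, 1.0, 5.0):
        y = rng.normal(shift, 1.0, 10000)
        print(shift, w1(x, y))  # grows roughly linearly with the shift, no saturation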

    The training code is identical to the listing in Section 1: the gradient_penalty function there implements the WGAN-GP term, and the critic is trained with mean-based Wasserstein losses rather than cross-entropy, so the listing is not repeated here.

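    For contrast, the original WGAN enforced the Lipschitz constraint on the critic by weight clipping rather than a gradient penalty. A minimal sketch of how the discriminator step would change (assuming D, G, optim_D, batchsz and data_iter as in the Section 1 listing; clip value 0.01 as in the WGAN paper):

    # vanilla WGAN critic step: no gradient penalty, clip weights instead
    x = next(data_iter)
    xr = torch.from_numpy(x).cuda()
    z = torch.randn(batchsz, 2).cuda()
    loss_D = -D(xr).mean() + D(G(z).detach()).mean()
    optim_D.zero_grad()
    loss_D.backward()
    optim_D.step()
    for p in D.parameters():
        p.data.clamp_(-0.01, 0.01)  # crude Lipschitz enforcement via clipping

    A vanilla WGAN critic would also drop the final Sigmoid, since its output is an unbounded score rather than a probability.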
    

  • Original article: https://blog.csdn.net/Ashen_0nee/article/details/126189371