• Machine Learning --- Incremental Learning


    1. Incremental Learning

    Incremental learning, as a machine learning approach, is currently receiving broad attention. In incremental learning, incoming data is continually used to extend the existing model's knowledge, i.e., to train the model further; it represents a dynamic learning technique. A learning method can be classified as incremental if it satisfies the following conditions: it can extract useful information from new data; it does not need access to the original data already used to train the classifier; it retains the knowledge it has previously learned; and it can effectively handle new classes that appear in new data. Many machine learning algorithms can be applied incrementally, for example decision trees, rule learning, neural networks (RBF networks, Learn++, Fuzzy ARTMAP, TopoART, and IGNG), and incremental SVMs. The Learn++ algorithm is a supervised, ensemble-based incremental learning algorithm that can also learn new classes.

    Incremental algorithms are frequently applied to data streams and big data, for instance stock-trend prediction and user-preference analysis. In such streams, new data can be fed into the model continuously to refine it. Incremental learning has also been applied to clustering, dimensionality reduction, feature selection, data representation, reinforcement learning, data mining, and more. With the rapid development and wide adoption of database and Internet technology, organizations everywhere have accumulated massive amounts of data, and the volume grows quickly every day. Incremental learning makes it possible to exploit this newly added data to train and further refine a model. Moreover, it helps us understand and imitate, at the system level, how the human brain learns and how biological neural networks are organized, providing a technical foundation for developing new computational models and efficient learning algorithms.

    Suppose we have 200 samples and train on 150 first, then on the remaining 50. The difference from training on all 200 at once is that by the time the second batch of 50 arrives, the first 150 samples are gone, so the model fits the later data more closely. If we retrain incrementally on a regular schedule, data closer to the present has a larger influence on the model, which is usually exactly what we want. But if the last batch is of very poor quality, it may overwrite what was learned from earlier, correct examples and lead the model astray. Likewise, if we split the data by time into several parts and train in order from earliest to latest, each model building on the previous one, we indirectly increase the weight of the later examples, as the sketch below illustrates.
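
    A minimal sketch of this recency effect, using scikit-learn's SGDClassifier on a synthetic 200-sample dataset (the data and the 150/50 split here are purely illustrative, not from any real workload):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    # 200 synthetic samples, split into an initial batch of 150 and a later batch of 50.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    # Incremental: by the second call, the first 150 samples are no longer available.
    incr = SGDClassifier(random_state=0)
    incr.partial_fit(X[:150], y[:150], classes=np.unique(y))
    incr.partial_fit(X[150:], y[150:])

    # Batch: a single fit over all 200 samples at once.
    batch = SGDClassifier(random_state=0).fit(X, y)

    # The weight vectors differ; the incremental model is pulled toward the newest batch.
    print(np.linalg.norm(incr.coef_ - batch.coef_))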

    2. XGBoost

    XGBoost offers two modes of incremental training: one adds new trees on top of the current boosted trees, leaving the original trees unchanged; the other keeps the current tree structures fixed and recomputes the leaf-node weights, optionally adding new trees as well.

    import xgboost as xgb
    from sklearn.datasets import load_digits  # training data

    xgb_params_01 = {}
    digits_2class = load_digits(n_class=2)
    X_2class = digits_2class['data']
    y_2class = digits_2class['target']
    dtrain_2class = xgb.DMatrix(X_2class, label=y_2class)  # load the data
    gbdt_03 = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=3)  # train a model with three trees
    print(gbdt_03.get_dump())  # dump the model
    gbdt_03a = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=7, xgb_model=gbdt_03)  # continue training on top of the existing model
    print(gbdt_03a.get_dump())

    This uses the XGBoost library to train a gradient boosting decision tree classifier, with sklearn's load_digits dataset as the training data.

    The load_digits function loads the handwritten-digits dataset.

    xgb_params_01 = {} initializes an empty dictionary xgb_params_01 to hold the XGBoost model's parameters. Since it is empty here, XGBoost's default parameters are used.

    digits_2class = load_digits(n_class=2) loads the handwritten-digits dataset, keeping only two classes (digits 0 and 1) as training data.

    X_2class = digits_2class['data'] extracts the feature data from digits_2class and stores it in X_2class.

    y_2class = digits_2class['target'] extracts the target labels (the actual value of each handwritten digit) and stores them in y_2class.

    dtrain_2class = xgb.DMatrix(X_2class, label=y_2class) wraps the features X_2class and labels y_2class in XGBoost's DMatrix structure, the format XGBoost consumes.

    gbdt_03 = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=3) trains a model with XGBoost's train function, using the empty parameter dictionary xgb_params_01 defined above and the training data dtrain_2class, for 3 boosting rounds (i.e., 3 decision trees).

    print(gbdt_03.get_dump()) prints the structure and weights of the trained model. The get_dump() method returns a list of strings, one per tree, describing each tree's structure and leaf weights.

    gbdt_03a = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=7, xgb_model=gbdt_03) continues training on top of the 3-round model gbdt_03, adding 7 more rounds for a total of 10 (3 + 7).

    print(gbdt_03a.get_dump()) prints the structure and weights of the continued model.
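
    The second mode (keep the existing tree structures and recompute their leaf weights) is not exercised above. A minimal sketch of it, reusing gbdt_03 and dtrain_2class from the snippet and relying on XGBoost's process_type='update' with the refresh updater (exact behavior may vary slightly across XGBoost versions):

    refresh_params = {
        'process_type': 'update',  # modify existing trees instead of growing new ones
        'updater': 'refresh',      # recompute node statistics on the supplied data
        'refresh_leaf': True,      # also update the leaf values, not just internal stats
    }
    # num_boost_round must not exceed the number of trees in the loaded model (3 here).
    gbdt_refreshed = xgb.train(refresh_params, dtrain_2class, num_boost_round=3, xgb_model=gbdt_03)
    print(gbdt_refreshed.get_dump())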

    3. sklearn

    sklearn provides many algorithms capable of incremental learning. Not every algorithm supports it, but any estimator that exposes a partial_fit function can be trained incrementally. Incremental learning also suits situations where the data is very large: learning from small batches (sometimes called online learning) is the core of the incremental approach, and it keeps only a small amount of data in memory at any one time.
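
    One way to see which estimators qualify is to scan scikit-learn for classes that define partial_fit; a small sketch (the resulting list varies across sklearn versions):

    from sklearn.utils import all_estimators

    # Any estimator class that defines partial_fit can be trained incrementally.
    incremental = [name for name, cls in all_estimators() if hasattr(cls, "partial_fit")]
    print(incremental)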

    import numpy as np
    # Finds a dictionary (a set of atoms) that can best be used to represent data using a sparse code.
    from sklearn.decomposition import MiniBatchDictionaryLearning
    # Linear dimensionality reduction using Singular Value Decomposition of centered data,
    # keeping only the most significant singular vectors to project the data to a lower dimensional space.
    from sklearn.decomposition import IncrementalPCA
    # Latent Dirichlet Allocation with online variational Bayes algorithm
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.datasets import load_iris

    iris = load_iris()
    X = iris.data
    permutation = np.random.permutation(X.shape[0])
    shuffled_X = X[permutation, :]
    X_train = shuffled_X[:int(X.shape[0]*0.5), :]
    X_incr = shuffled_X[int(X.shape[0]*0.5):int(X.shape[0]*0.7), :]
    X_test = shuffled_X[int(X.shape[0]*0.7):, :]
    print("shape of X_train:{}, {}".format(X_train.shape[0], X_train.shape[1]))
    print("shape of X_incr:{}, {}".format(X_incr.shape[0], X_incr.shape[1]))
    print("shape of X_test:{}, {}".format(X_test.shape[0], X_test.shape[1]))

    # MiniBatchDictionaryLearning(): unsupervised, so no labels are passed
    model = MiniBatchDictionaryLearning()
    model.fit(X_train)
    model.partial_fit(X_incr)

    # IncrementalPCA(): unsupervised, so no labels are passed
    model = IncrementalPCA()
    model.fit(X_train)
    model.partial_fit(X_incr)

    # LatentDirichletAllocation(): unsupervised, so no labels are passed
    model = LatentDirichletAllocation()
    model.fit(X_train)
    model.partial_fit(X_incr)

    # These are transformers, not classifiers, so there is no accuracy to score on X_test.

    This loads the iris dataset and splits it into a training set, an incremental-learning set, and a test set. It first imports the required libraries and modules, then loads the dataset, shuffles it, and divides it into three subsets: training (50%), incremental (20%), and test (30%). Finally it prints the shape of each subset.

    Three different models are then trained incrementally: first MiniBatchDictionaryLearning(), then IncrementalPCA(), then LatentDirichletAllocation(). For each model, fit() is called on the training data and partial_fit() is then called on the incremental data. All three are unsupervised, so no labels are needed, and classification accuracy is not a meaningful metric for them.
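
    For completeness, here is a short sketch (not part of the original code) of how such an incrementally fitted transformer is typically used afterward, with IncrementalPCA as the example:

    from sklearn.decomposition import IncrementalPCA
    from sklearn.datasets import load_iris

    X = load_iris().data
    ipca = IncrementalPCA(n_components=2)
    # Feed the data in two chunks; each partial_fit call refines the running estimates.
    ipca.partial_fit(X[:75])
    ipca.partial_fit(X[75:])
    X_reduced = ipca.transform(X)  # project onto the two leading components
    print(X_reduced.shape)         # (150, 2)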

    from sklearn.cluster import MiniBatchKMeans
    import numpy as np

    X = np.array([[1, 2], [1, 4], [1, 0],
                  [4, 2], [4, 0], [4, 4],
                  [4, 5], [0, 1], [2, 2],
                  [3, 2], [5, 5], [1, -1]])
    print(X.shape)

    # manually fit on batches
    kmeans = MiniBatchKMeans(n_clusters=2, random_state=0, batch_size=6)
    kmeans = kmeans.partial_fit(X[0:6, :])
    kmeans = kmeans.partial_fit(X[6:12, :])
    print(kmeans.cluster_centers_)
    print(kmeans.predict([[0, 0], [4, 4]]))

    # fit on the whole data
    kmeans = MiniBatchKMeans(n_clusters=2, random_state=0, batch_size=6, max_iter=10).fit(X)
    print(kmeans.cluster_centers_)
    print(kmeans.predict([[0, 0], [4, 4]]))

     

    This first imports the MiniBatchKMeans class and numpy, then creates a two-dimensional array X with 12 samples. A kmeans object is constructed with 2 clusters, a random seed of 0, and a batch size of 6 per iteration. The data is then fitted manually in batches via partial_fit: first the first 6 samples, then the remaining 6. The cluster centers are printed, along with the predicted cluster for two new points. Finally, the whole dataset is fitted in one call to fit, and the cluster centers and predictions for the same new points are printed again for comparison.

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB, BernoulliNB
    from sklearn.linear_model import Perceptron, SGDClassifier, PassiveAggressiveClassifier
    # introduction of datasets: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets
    from sklearn.datasets import load_iris, load_digits

    def split_dataset(X, Y):
        # Shuffle, then split: 20% initial training, 50% incremental, 30% test.
        permutation = np.random.permutation(X.shape[0])
        X, Y = X[permutation, :], Y[permutation]
        i1, i2 = int(X.shape[0] * 0.2), int(X.shape[0] * 0.7)
        return (X[:i1], Y[:i1]), (X[i1:i2], Y[i1:i2]), (X[i2:], Y[i2:])

    models = [MultinomialNB, BernoulliNB, Perceptron, SGDClassifier, PassiveAggressiveClassifier]
    for dataset_name, loader in [("iris", load_iris), ("digits", load_digits)]:
        data = loader()
        (X_train, Y_train), (X_incr, Y_incr), (X_test, Y_test) = split_dataset(data.data, data.target)
        print("shape of X_train:{}, {}".format(X_train.shape[0], X_train.shape[1]))
        print("shape of X_incr:{}, {}".format(X_incr.shape[0], X_incr.shape[1]))
        print("shape of X_test:{}, {}".format(X_test.shape[0], X_test.shape[1]))
        for Model in models:
            model = Model()
            model.fit(X_train, Y_train)        # initial training on the first 20%
            acc1 = model.score(X_test, Y_test)
            # partial_fit continues from the fitted state; this assumes the initial
            # split contained every class, otherwise classes= must be passed here.
            model.partial_fit(X_incr, Y_incr)
            acc2 = model.score(X_test, Y_test)
            print("{} for {}\ninitial accuracy:{}, after incremental:{}".format(
                Model.__name__, dataset_name, acc1, acc2))

    This compares how several classifiers perform on two datasets (iris and handwritten digits). Each dataset is loaded and split into a training set, an incremental set, and a test set. Five classifiers that support partial_fit (MultinomialNB, BernoulliNB, Perceptron, SGDClassifier, and PassiveAggressiveClassifier) are then trained and evaluated on both datasets. For each classifier, the test-set accuracy after the initial fit and again after the incremental partial_fit are computed and printed; a streaming variant of the same pattern is sketched below.
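
    As noted at the start of this section, the point of partial_fit is that only one batch ever needs to live in memory. A minimal out-of-core sketch of the same classifier pattern (the batch generator is a hypothetical stand-in for reading chunks from disk, a database, or a stream):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def batch_stream(n_batches=10, batch_size=100, n_features=20, seed=0):
        # Hypothetical generator standing in for a real chunked data source.
        rng = np.random.default_rng(seed)
        for _ in range(n_batches):
            X = rng.normal(size=(batch_size, n_features))
            y = (X[:, 0] > 0).astype(int)  # simple synthetic labeling rule
            yield X, y

    clf = SGDClassifier()
    for i, (X_batch, y_batch) in enumerate(batch_stream()):
        if i == 0:
            # The first call must declare every class that will ever appear.
            clf.partial_fit(X_batch, y_batch, classes=np.array([0, 1]))
        else:
            clf.partial_fit(X_batch, y_batch)
    # At any moment only the current batch is held in memory.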

  • Original article: https://blog.csdn.net/weixin_43961909/article/details/136313636