For the dataset preparation used below, see the data link in the article: Python实践通过使用XGBoost中的尽早停止【Early Stopping】策略来避免过度拟合 (Together_CZ's blog on CSDN).
early_stopping_rounds : int, optional
Activates early stopping. Validation error needs to decrease at
least every round(s) to continue training.
Requires at least one item in evals. If there's more than one,
will use the last. Returns the model from the last iteration
(not the best one). If early stopping occurs, the model will
have three additional fields: bst.best_score, bst.best_iteration
and bst.best_ntree_limit.
(Use bst.best_ntree_limit to get the correct value if num_parallel_tree
and/or num_class appears in the parameters)
In plain terms: early stopping accepts one or more evaluation datasets.
With a single dataset, the metric is evaluated on that dataset. If the metric has not improved on it for early_stopping_rounds rounds before the specified number of training rounds is reached, training stops and the model from the last iteration is returned (not the best one). When early stopping occurs, three additional attributes are available for reference: bst.best_score, bst.best_iteration and bst.best_ntree_limit. With multiple datasets, the metric on the last dataset is the one used to decide whether early stopping triggers.
eval_metric likewise accepts one or more evaluation metrics. If several are given, the same rule as eval_set applies: the last metric is the one used for early stopping.
This article exercises the logic above through the two parameters eval_set and eval_metric:
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# load data
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
# split data into X and y
X = dataset[:, 0:8]
Y = dataset[:, 8]
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
# fit the model on the training data, monitoring both splits;
# the last metric ('error') on the last eval_set entry drives early stopping
model = XGBClassifier(n_estimators=1000)
eval_set = [(X_train, y_train), (X_test, y_test)]
early_stopping_rounds = 100
model.fit(X_train, y_train, early_stopping_rounds=early_stopping_rounds,
          eval_metric=['logloss', 'error'], eval_set=eval_set, verbose=True)
# make predictions for test data
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
# evaluate predictions
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))