• [AMEX] LGBM + Optuna for the American Express Default Prediction competition (Kaggle)


    Competition description:

    Whether out at a restaurant or buying tickets to a concert, modern life counts on the convenience of a credit card to make daily purchases. It saves us from carrying large amounts of cash and also can advance a full purchase that can be paid over time. How do card issuers know we'll pay back what we charge? That's a complex problem with many existing solutions — and even more potential improvements, to be explored in this competition.

    Credit default prediction is central to managing risk in a consumer lending business. Credit default prediction allows lenders to optimize lending decisions, which leads to a better customer experience and sound business economics. Current models exist to help manage risk, but it's possible to create better models that outperform those currently in use.

    American Express is a globally integrated payments company. The largest payment card issuer in the world, they provide customers with access to products, insights, and experiences that enrich lives and build business success.

    In this competition, you'll apply your machine learning skills to predict credit default. Specifically, you will leverage an industrial-scale dataset to build a machine learning model that challenges the current model in production. Training, validation, and testing datasets include time-series behavioral data and anonymized customer profile information. You're free to explore any technique to create the most powerful model, from creating features to using the data in a more organic way within the model.

    If successful, you'll help create a better customer experience for cardholders by making it easier to be approved for a credit card. Top solutions could challenge the credit default prediction model used by the world's largest payment card issuer — earning you cash prizes, the opportunity to interview with American Express, and potentially a rewarding new career.

    Data description:

    The objective of this competition is to predict the probability that a customer does not pay back their credit card balance amount in the future, based on their monthly customer profile. The target binary variable is calculated by observing an 18-month performance window after the latest credit card statement, and if the customer does not pay the due amount within 120 days after their latest statement date, it is considered a default event.

    The dataset contains aggregated profile features for each customer at each statement date. Features are anonymized and normalized, and fall into the following general categories:

    D_* = Delinquency variables
    S_* = Spend variables
    P_* = Payment variables
    B_* = Balance variables
    R_* = Risk variables

    with the following features being categorical:

    ['B_30', 'B_38', 'D_114', 'D_116', 'D_117', 'D_120', 'D_126', 'D_63', 'D_64', 'D_66', 'D_68']

    Your task is to predict, for each customer_ID, the probability of a future payment default (target = 1).

    Note that the negative class has been subsampled to 5% for this dataset, and it therefore receives a 20x weighting in the scoring metric.
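A quick back-of-the-envelope sketch of what that weighting implies. The row counts below are hypothetical, chosen only to illustrate the arithmetic of the 5% subsampling and the 20x metric weight:

```python
# Hypothetical counts for illustration only -- not the real dataset sizes.
n_pos = 1_000          # positive (default) rows, kept in full
n_neg_sampled = 5_000  # negative rows remaining after 5% subsampling

# Each retained negative stands in for 20 negatives of the full population
# (1 / 0.05 = 20), which is why the metric weights negatives 20x and positives 1x.
w_neg = 20.0
effective_neg = n_neg_sampled * w_neg
implied_default_rate = n_pos / (n_pos + effective_neg)
print(effective_neg, round(implied_default_rate, 4))  # 100000.0 0.0099
```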

    Submission format:

     

     

    LGBM Optuna Starter

    This notebook shows how to use Optuna with LGBM models to automate hyperparameter search.

    References

    (1) AMEX data - integer dtypes - parquet format | Kaggle

    (2) XGBoost Starter - [0.793] | Kaggle

    (3) American Express - Default Prediction | Kaggle

    # LOAD LIBRARIES
    import pandas as pd, numpy as np # CPU libraries
    import cupy, cudf # GPU libraries
    import matplotlib.pyplot as plt, gc, os
    print('RAPIDS version', cudf.__version__)

    # VERSION NAME FOR SAVED MODEL FILES
    VER = 1
    # TRAIN RANDOM SEED
    SEED = 42
    # FILL NAN VALUE
    NAN_VALUE = -127 # will fit in int8
    # FOLDS PER MODEL
    FOLDS = 5

    Process and Feature Engineer the Training Data

    I will use the features and data preprocessing presented in the XGBoost starter notebook (2).

    We load @raddar's Kaggle dataset (1), which converts the raw data to integer dtypes in parquet format. We then engineer the features suggested by @huseyincot in his notebooks, using RAPIDS on the GPU to create the new features quickly.

    def read_file(path = '', usecols = None):
        # LOAD DATAFRAME
        if usecols is not None: df = cudf.read_parquet(path, columns=usecols)
        else: df = cudf.read_parquet(path)
        # REDUCE DTYPE FOR CUSTOMER AND DATE
        df['customer_ID'] = df['customer_ID'].str[-16:].str.hex_to_int().astype('int64')
        df.S_2 = cudf.to_datetime( df.S_2 )
        # SORT BY CUSTOMER AND DATE (so agg('last') works correctly)
        #df = df.sort_values(['customer_ID','S_2'])
        #df = df.reset_index(drop=True)
        # FILL NAN
        df = df.fillna(NAN_VALUE)
        print('shape of data:', df.shape)
        return df
    def process_and_feature_engineer(df):
        # FEATURE ENGINEERING FROM
        # https://www.kaggle.com/code/huseyincot/amex-agg-data-how-it-created
        all_cols = [c for c in list(df.columns) if c not in ['customer_ID','S_2']]
        cat_features = ["B_30","B_38","D_114","D_116","D_117","D_120","D_126","D_63","D_64","D_66","D_68"]
        num_features = [col for col in all_cols if col not in cat_features]

        test_num_agg = df.groupby("customer_ID")[num_features].agg(['mean', 'std', 'min', 'max', 'last'])
        test_num_agg.columns = ['_'.join(x) for x in test_num_agg.columns]

        test_cat_agg = df.groupby("customer_ID")[cat_features].agg(['count', 'last', 'nunique'])
        test_cat_agg.columns = ['_'.join(x) for x in test_cat_agg.columns]

        df = cudf.concat([test_num_agg, test_cat_agg], axis=1)
        del test_num_agg, test_cat_agg
        print('shape after engineering', df.shape)
        return df
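For readers without RAPIDS/GPU access, the same per-customer aggregations can be sketched in plain pandas. The toy frame below stands in for the statement-level data — only B_30 is a real feature name from the competition; P_2 and all values are illustrative:

```python
import pandas as pd

# Toy statement-level frame: two customers, several statements each.
df = pd.DataFrame({
    'customer_ID': [1, 1, 2, 2, 2],
    'P_2': [0.3, 0.4, 0.9, 0.8, 0.7],   # numeric feature (made-up values)
    'B_30': [0, 0, 1, 1, 2],            # categorical feature (made-up values)
})

# Numeric features: mean/std/min/max/last, then flatten the MultiIndex columns.
num_agg = df.groupby('customer_ID')[['P_2']].agg(['mean', 'std', 'min', 'max', 'last'])
num_agg.columns = ['_'.join(c) for c in num_agg.columns]

# Categorical features: count/last/nunique.
cat_agg = df.groupby('customer_ID')[['B_30']].agg(['count', 'last', 'nunique'])
cat_agg.columns = ['_'.join(c) for c in cat_agg.columns]

agg = pd.concat([num_agg, cat_agg], axis=1)
print(agg.shape)  # (2, 8): one row per customer, 5 + 3 engineered columns
print(agg.loc[2, 'P_2_last'], agg.loc[2, 'B_30_nunique'])  # 0.7 2
```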
    print('Reading train data...')
    TRAIN_PATH = '../input/amex-data-integer-dtypes-parquet-format/train.parquet'
    train = read_file(path = TRAIN_PATH)
    train = process_and_feature_engineer(train)

    targets = cudf.read_csv('../input/amex-default-prediction/train_labels.csv')
    targets['customer_ID'] = targets['customer_ID'].str[-16:].str.hex_to_int().astype('int64')
    targets.index = targets['customer_ID'].sort_index()
    targets = targets.drop('customer_ID', axis=1)
    train = train.join(targets, on=['customer_ID']).sort_index()
    del targets
    gc.collect()

    # NEEDED TO MAKE CV DETERMINISTIC (cudf merge above randomly shuffles rows)
    train = train.sort_index().reset_index()

    # FEATURES
    FEATURES = train.columns[1:-1]

    Faster Metric Implementation

    def amex_metric(y_true: np.array, y_pred: np.array) -> float:
        # count of positives and negatives
        n_pos = y_true.sum()
        n_neg = y_true.shape[0] - n_pos
        # sort by descending prediction values
        indices = np.argsort(y_pred)[::-1]
        preds, target = y_pred[indices], y_true[indices]
        # filter the top 4% by cumulative row weights
        weight = 20.0 - target * 19.0
        cum_norm_weight = (weight / weight.sum()).cumsum()
        four_pct_filter = cum_norm_weight <= 0.04
        # default rate captured at 4%
        d = target[four_pct_filter].sum() / n_pos
        # weighted gini coefficient
        lorentz = (target / n_pos).cumsum()
        gini = ((lorentz - cum_norm_weight) * weight).sum()
        # max weighted gini coefficient
        gini_max = 10 * n_neg * (1 - 19 / (n_pos + 20 * n_neg))
        # normalized weighted gini coefficient
        g = gini / gini_max
        return 0.5 * (g + d)

    def lgb_amex_metric(y_true, y_pred):
        return ('Score', amex_metric(y_true, y_pred), True)
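As a sanity check, a perfect ranking (every default scored above every non-default) should score exactly 1.0: both the normalized Gini term and the 4% capture term hit their maxima. The snippet restates the metric so it runs standalone:

```python
import numpy as np

# Restates amex_metric from above so this check is self-contained.
def amex_metric(y_true, y_pred):
    n_pos = y_true.sum()
    n_neg = y_true.shape[0] - n_pos
    indices = np.argsort(y_pred)[::-1]          # sort by descending prediction
    target = y_true[indices]
    weight = 20.0 - target * 19.0               # negatives weigh 20, positives 1
    cum_norm_weight = (weight / weight.sum()).cumsum()
    d = target[cum_norm_weight <= 0.04].sum() / n_pos   # capture rate at top 4%
    lorentz = (target / n_pos).cumsum()
    gini = ((lorentz - cum_norm_weight) * weight).sum()
    gini_max = 10 * n_neg * (1 - 19 / (n_pos + 20 * n_neg))
    return 0.5 * (gini / gini_max + d)

# Synthetic labels: 100 defaults, 900 non-defaults.
y_true = np.array([1] * 100 + [0] * 900)
print(round(amex_metric(y_true, y_true.astype(float)), 6))  # 1.0
```

A fully inverted ranking gives a negative score, since the Gini term goes negative while the capture term drops to zero.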
    import datetime
    import warnings
    import gc
    import pickle
    import sklearn
    from sklearn.model_selection import StratifiedKFold, train_test_split
    import lightgbm as lgb

    Hyperparameter Tuning with Optuna

    import optuna

    train_pd = train.to_pandas()
    del train
    gc.collect()

    # Hold out 25% as a validation set for the hyperparameter search
    # (random_state added for a reproducible split)
    train_df, test_df = train_test_split(train_pd, test_size=0.25,
                                         stratify=train_pd['target'],
                                         random_state=SEED)
    del train_pd
    gc.collect()

    X_train = train_df.drop(['customer_ID', 'target'], axis=1)
    X_test = test_df.drop(['customer_ID', 'target'], axis=1)
    y_train = train_df['target']
    y_test = test_df['target']
    del train_df, test_df
    gc.collect()
    # 1. Define an objective function to be maximized.
    def objective(trial):
        dtrain = lgb.Dataset(X_train, label=y_train)
        # 2. Suggest values of the hyperparameters using a trial object.
        param = {
            'objective': 'binary',
            'metric': 'binary_logloss',
            'seed': 42,
            'lambda_l1': trial.suggest_float('lambda_l1', 1e-8, 10.0, log=True),
            'lambda_l2': trial.suggest_float('lambda_l2', 1e-8, 10.0, log=True),
            'num_leaves': trial.suggest_int('num_leaves', 2, 256),
            'feature_fraction': trial.suggest_float('feature_fraction', 0.1, 1.0),
            'bagging_fraction': trial.suggest_float('bagging_fraction', 0.1, 1.0),
            'bagging_freq': trial.suggest_int('bagging_freq', 1, 7),
            # name aligned with the LightGBM parameter (was 'min_child_samples')
            'min_data_in_leaf': trial.suggest_int('min_data_in_leaf', 5, 100),
            'learning_rate': trial.suggest_float('learning_rate', 0.001, 0.05, step=0.001),
            'device': 'gpu',
            'verbosity': -1,
        }
        gbm = lgb.train(param, dtrain)
        preds = gbm.predict(X_test)
        # NOTE: this objective maximizes accuracy; returning
        # amex_metric(y_test.values, preds) would target the competition metric.
        pred_labels = np.rint(preds)
        accuracy = sklearn.metrics.accuracy_score(y_test, pred_labels)
        return accuracy

  • Original post: https://blog.csdn.net/sinat_37574187/article/details/126085449