ML with lightgbm.sklearn: the LGBMClassifier function, with an overview, concrete examples, and tuning tips (a detailed guide)
Contents
Overview of the LGBMClassifier function, concrete examples, and tuning tips
Tuning tips for LGBMClassifier
1. LightGBM works best on larger datasets
For smaller datasets (fewer than about 10,000 records), LightGBM may not be the best choice, so tuning its parameters there is unlikely to help.
2. Prefer a smaller learning_rate and a larger num_iteration
Moreover, if you want a higher num_iteration, you should also use early_stopping_rounds so that training stops once the model can no longer learn anything useful, as in the sketch below.
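A minimal sketch of this advice with the scikit-learn API (the synthetic dataset, the split, and the specific values learning_rate=0.05, n_estimators=1000, and 50 stopping rounds are illustrative assumptions; lightgbm versions of this article's era accept early_stopping_rounds directly in fit, while newer releases use callbacks instead):

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

# small learning_rate, generous num_iteration (n_estimators), plus early stopping
clf = lgb.LGBMClassifier(learning_rate=0.05, n_estimators=1000)
clf.fit(X_train, y_train,
        eval_set=[(X_valid, y_valid)],
        eval_metric='auc',
        early_stopping_rounds=50)
print(clf.best_iteration_)  # the iteration where validation AUC stopped improving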
3. Tuning tips for imbalanced samples
LightGBM provides two parameters for this: is_unbalance and scale_pos_weight.
is_unbalance: when True, the algorithm tries to automatically balance the weight of the dominant label (using the pos/neg ratio of the label column).
scale_pos_weight: defaults to 1, i.e., it assumes positive and negative labels are equally frequent. For an imbalanced dataset, the following formula is recommended:
scale_pos_weight = number of negative samples / number of positive samples
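A quick sketch of computing this ratio from a binary label vector (the variable names and counts are made up for illustration):

import numpy as np
import lightgbm as lgb

y = np.array([0] * 950 + [1] * 50)                # 19:1 imbalanced labels
ratio = (y == 0).sum() / (y == 1).sum()           # negatives / positives = 19.0
clf = lgb.LGBMClassifier(scale_pos_weight=ratio)  # alternative to is_unbalance=True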
4. When tuning, split the parameter dictionary into two groups
Parameters to tune:
search_params = {'learning_rate': 0.4,
                 'max_depth': 15,
                 'num_leaves': 20,
                 'feature_fraction': 0.8,
                 'subsample': 0.2}
Fixed parameters:
fixed_params = {'objective': 'binary',
                'metric': 'auc',
                'is_unbalance': True,
                'boosting': 'gbdt',
                'num_boost_round': 300,
                'early_stopping_rounds': 30}
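One way to exploit this split (a hedged sketch: the Dataset objects and the candidate learning rates are assumptions, and num_boost_round / early_stopping_rounds are passed inside the params dict, which LightGBM accepts as parameter aliases) is to merge the fixed dictionary with each tuned configuration:

import lightgbm as lgb

train_set = lgb.Dataset(X_train, y_train)   # assumed to exist
valid_set = lgb.Dataset(X_valid, y_valid)

for lr in (0.1, 0.2, 0.4):
    # later keys win, so the tuned learning_rate overrides search_params
    params = {**fixed_params, **search_params, 'learning_rate': lr}
    booster = lgb.train(params, train_set, valid_sets=[valid_set])
    print(lr, booster.best_score)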
Overview of the LGBMClassifier function
The original LightGBM paper: "LightGBM: A Highly Efficient Gradient Boosting Decision Tree" (Ke et al., NIPS 2017).
1. Parameters of all the weak learners
Because LightGBM grows trees leaf-wise, tree complexity is controlled through num_leaves rather than max_depth. The two are roughly related by num_leaves = 2^(max_depth), so num_leaves should be set below 2^(max_depth); otherwise LightGBM emits a warning and the model may overfit.
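For instance (the values here are arbitrary):

import lightgbm as lgb

# max_depth=7 allows at most 2**7 = 128 leaves, so pick num_leaves below that
clf = lgb.LGBMClassifier(max_depth=7, num_leaves=100)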
bagging_freq=15: perform bagging (row re-sampling) every 15 iterations; used together with bagging_fraction to control overfitting.
bagging_fraction=0.8: row subsampling, to control overfitting. It specifies the percentage of rows used in each tree-building iteration, meaning a random subset of rows is selected to fit each learner (tree). This improves not only generalization but also training speed.
feature_fraction=0.8: column subsampling of the features, to control overfitting. A random subset of features is selected for each iteration (tree); for example, a value of 0.8 selects 80% of the features before training each tree. It is commonly used to speed up training and to combat overfitting.
subsample=1.0: sampling rate of training instances (rows)
colsample_bytree=1.0: sampling rate of training features (columns)
subsample_freq=1: frequency of row subsampling
reg_alpha=0.5: L1 regularization coefficient
reg_lambda=0.5: L2 regularization coefficient
min_split_gain=0.0: minimum gain required to make a split
min_child_weight=0.001: minimum sum of instance weight needed in a child (leaf) node
min_child_samples=20: minimum number of samples required in a child (leaf) node
random_state=None: random seed
n_jobs=-1: number of parallel threads
silent=True: whether to print log messages during training
verbose=-1: verbosity of logging
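Putting the knobs above together, a hedged sketch of a regularized classifier using the scikit-learn parameter names (subsample, subsample_freq, and colsample_bytree are the sklearn-API aliases of bagging_fraction, bagging_freq, and feature_fraction; the values simply echo the examples above and are not recommendations):

import lightgbm as lgb

clf = lgb.LGBMClassifier(
    max_depth=15,
    num_leaves=20,            # kept well below 2**15
    subsample=0.8,            # row sampling per tree (bagging_fraction)
    subsample_freq=15,        # re-sample rows every 15 iterations (bagging_freq)
    colsample_bytree=0.8,     # feature sampling per tree (feature_fraction)
    reg_alpha=0.5,            # L1 regularization
    reg_lambda=0.5,           # L2 regularization
    min_split_gain=0.0,
    min_child_weight=0.001,
    min_child_samples=20,
    random_state=42,
    n_jobs=-1,
)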
2. The main methods explained, from the source
self._classes = self._le.classes_
self._n_classes = len(self._classes)
if self._n_classes > 2:
    # Switch to using a multiclass objective in the underlying LGBM instance
    ova_aliases = ("multiclassova", "multiclass_ova", "ova", "ovr")
    if self._objective not in ova_aliases and not callable(self._objective):
        self._objective = "multiclass"
    if eval_metric in ('logloss', 'binary_logloss'):
        eval_metric = "multi_logloss"
    elif eval_metric in ('error', 'binary_error'):
        eval_metric = "multi_error"
else:
    if eval_metric in ('logloss', 'multi_logloss'):
        eval_metric = 'binary_logloss'
    elif eval_metric in ('error', 'multi_error'):
        eval_metric = 'binary_error'
if eval_set is not None:
    if isinstance(eval_set, tuple):
        eval_set = [eval_set]
    for i, (valid_x, valid_y) in enumerate(eval_set):
        if valid_x is X and valid_y is y:
            eval_set[i] = (valid_x, _y)
        else:
            eval_set[i] = (valid_x, self._le.transform(valid_y))
super(LGBMClassifier, self).fit(X, _y,
                                sample_weight=sample_weight,
                                init_score=init_score, eval_set=eval_set,
                                eval_names=eval_names,
                                eval_sample_weight=eval_sample_weight,
                                eval_class_weight=eval_class_weight,
                                eval_init_score=eval_init_score,
                                eval_metric=eval_metric,
                                early_stopping_rounds=early_stopping_rounds,
                                verbose=verbose,
                                feature_name=feature_name,
                                categorical_feature=categorical_feature,
                                callbacks=callbacks)
return self
fit.__doc__ = LGBMModel.fit.__doc__
def predict(self, X, raw_score=False, num_iteration=None,
            pred_leaf=False, pred_contrib=False, **kwargs):
    """Docstring is inherited from the LGBMModel."""
    result = self.predict_proba(X, raw_score, num_iteration,
                                pred_leaf, pred_contrib, **kwargs)
    if raw_score or pred_leaf or pred_contrib:
        return result
    else:
        class_index = np.argmax(result, axis=1)
        return self._le.inverse_transform(class_index)

predict.__doc__ = LGBMModel.predict.__doc__

def predict_proba(self, X, raw_score=False, num_iteration=None,
                  pred_leaf=False, pred_contrib=False, **kwargs):
    """Return the predicted probability for each class for each sample.

    Parameters
    ----------
    X : array-like or sparse matrix of shape = [n_samples, n_features]
        Input features matrix.
    raw_score : bool, optional (default=False)
        Whether to predict raw scores.
    num_iteration : int or None, optional (default=None)
        Limit number of iterations in the prediction. If None, if the
        best iteration exists, it is used; otherwise, all trees are
        used. If <= 0, all trees are used (no limits).
    pred_leaf : bool, optional (default=False)
        Whether to predict leaf index.
    pred_contrib : bool, optional (default=False)
        Whether to predict feature contributions.

        Note
        ----
        If you want to get more explanation for your model's predictions
        using SHAP values like SHAP interaction values, you can install
        the shap package (https://github.com/slundberg/shap).
    **kwargs
        Other parameters for the prediction.

    Returns
    -------
    predicted_probability : array-like of shape = [n_samples, n_classes]
        The predicted probability for each class for each sample.
    X_leaves : array-like of shape = [n_samples, n_trees * n_classes]
        If ``pred_leaf=True``, the predicted leaf of every tree for each sample.
    X_SHAP_values : array-like of shape = [n_samples, (n_features + 1) * n_classes]
        If ``pred_contrib=True``, the feature contributions for each sample.
    """
    result = super(LGBMClassifier, self).predict(X, raw_score, num_iteration,
                                                 pred_leaf, pred_contrib, **kwargs)
    if self._n_classes > 2 or pred_leaf or pred_contrib:
        return result
    else:
        return np.vstack((1. - result, result)).transpose()
@property
def classes_(self):
    """Get the class label array."""
    if self._classes is None:
        raise LGBMNotFittedError('No classes found. Need to call fit beforehand.')
    return self._classes
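To tie the walk-through together, here is a short usage sketch (the toy data is an assumption for illustration): fit label-encodes y, predict is the argmax of predict_proba mapped back through the encoder, and classes_ exposes the original label values.

import numpy as np
import lightgbm as lgb

X = np.random.rand(100, 5)
y = np.random.choice(['bird', 'cat', 'dog'], size=100)  # 3 classes, so fit switches to a multiclass objective

clf = lgb.LGBMClassifier(n_estimators=20).fit(X, y)
proba = clf.predict_proba(X)                  # shape (100, 3)
pred = clf.predict(X)                         # original string labels, via inverse_transform
assert (pred == clf.classes_[np.argmax(proba, axis=1)]).all()
print(clf.classes_)                           # ['bird' 'cat' 'dog']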