This post records a simple framework for grid search implemented with nested for loops. df_search records the result of each hyperparameter combination. lgb_model.best_score_ returns the model's best score, and lgb_model.best_iteration_ returns the model's best iteration, i.e. the best n_estimators.
import numpy as np
import pandas as pd
import lightgbm as lgb

df = pd.read_csv("this_is_train.csv")
df_search_columns = ["learning_rate", "num_leaves", "max_depth", "subsample",
                     "colsample_bytree", "best_iteration", "best_score"]
df_search = pd.DataFrame(columns=df_search_columns)
# colsample_bytree: 0.9, learning_rate: 0.001
lgb_params = {
    "objective": "mae",        # mae
    "n_estimators": 6000,
    "num_leaves": 256,         # 256
    "subsample": 0.6,
    "colsample_bytree": 0.8,
    "learning_rate": 0.00571,  # 0.00871
    "max_depth": 11,           # 11
    "n_jobs": 4,
    "device": "gpu",
    "verbosity": -1,
    "importance_type": "gain",
}
for learning_rate in [0.001, 0.005, 0.01, 0.015, 0.05]:
    for num_leaves in [300, 256, 200, 150]:
        for max_depth in [15, 13, 11, 9, 7]:
            for subsample in [0.8, 0.6, 0.5]:
                for colsample_bytree in [0.9, 0.8, 0.7]:
                    print(f"learning_rate: {learning_rate}, num_leaves: {num_leaves}, "
                          f"max_depth: {max_depth}, subsample: {subsample}, "
                          f"colsample_bytree: {colsample_bytree}")
                    lgb_params["learning_rate"] = learning_rate
                    lgb_params["num_leaves"] = num_leaves
                    lgb_params["max_depth"] = max_depth
                    lgb_params["subsample"] = subsample
                    lgb_params["colsample_bytree"] = colsample_bytree
                    # Train a LightGBM model for the current combination
                    lgb_model = lgb.LGBMRegressor(**lgb_params)
                    lgb_model.fit(
                        train_feats,
                        train_target,
                        eval_set=[(valid_feats, valid_target)],
                        callbacks=[
                            lgb.callback.early_stopping(stopping_rounds=100),
                            lgb.callback.log_evaluation(period=100),
                        ],
                    )
                    best_iteration = lgb_model.best_iteration_
                    best_score = lgb_model.best_score_
                    cache = pd.DataFrame(
                        [[learning_rate, num_leaves, max_depth, subsample,
                          colsample_bytree, best_iteration, best_score]],
                        columns=["learning_rate", "num_leaves", "max_depth", "subsample",
                                 "colsample_bytree", "best_iteration", "best_score"],
                    )
                    df_search = pd.concat([df_search, cache], ignore_index=True, axis=0)
df_search.to_csv("grid_search.csv", index=False)

To use this framework, adjust the training data df, the candidate value grids, and the LightGBM hyperparameters. Each run's results are recorded with the following code:
cache = pd.DataFrame(
    [[learning_rate, num_leaves, max_depth, subsample,
      colsample_bytree, best_iteration, best_score]],
    columns=df_search_columns,
)
df_search = pd.concat([df_search, cache], ignore_index=True, axis=0)
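Once grid_search.csv is written, the recorded results can be read back and ranked to find the best combination. Below is a minimal sketch; the numbers are made up for illustration, and best_score is assumed to have been stored as a scalar validation MAE (lower is better). Note that lgb_model.best_score_ is actually a nested dict, e.g. best_score_["valid_0"]["l1"], so you may want to extract the scalar before saving.

```python
import pandas as pd

# Hypothetical results in the same shape as df_search / grid_search.csv.
# In practice, replace this with: df_search = pd.read_csv("grid_search.csv")
df_search = pd.DataFrame(
    [
        [0.001, 256, 11, 0.6, 0.8, 5800, 0.412],
        [0.005, 256, 11, 0.6, 0.8, 2400, 0.397],
        [0.010, 300, 13, 0.8, 0.9, 1100, 0.405],
    ],
    columns=["learning_rate", "num_leaves", "max_depth", "subsample",
             "colsample_bytree", "best_iteration", "best_score"],
)

# Sort ascending so the lowest-MAE combination comes first.
best = df_search.sort_values("best_score").iloc[0]
print(best["learning_rate"], best["best_iteration"])
```

The selected best_iteration can then be used as n_estimators when retraining the final model on the full training data without early stopping.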