This page gives the Python API reference of xgboost. Please also refer to the Python Package Introduction for more information about the Python package.
Context manager for global XGBoost configuration.
Global configuration consists of a collection of parameters that can be applied in the global scope. See https://xgboost.readthedocs.io/en/stable/parameter.html for the full list of parameters supported in the global configuration.
Note
All settings, not just those presently modified, will be returned to their previous values when the context manager is exited. This is not thread-safe.
New in version 1.4.0.
new_config (Dict[str, Any]) – Keyword arguments representing the parameters and their values
Example
import xgboost as xgb
# Show all messages, including ones pertaining to debugging
xgb.set_config(verbosity=2)
# Get current value of global configuration
# This is a dict containing all parameters in the global configuration,
# including 'verbosity'
config = xgb.get_config()
assert config['verbosity'] == 2
# Example of using the context manager xgb.config_context().
# The context manager will restore the previous value of the global
# configuration upon exiting.
with xgb.config_context(verbosity=0):
# Suppress warning caused by model generated with XGBoost version < 1.0.0
bst = xgb.Booster(model_file='./old_model.bin')
assert xgb.get_config()['verbosity'] == 2 # old value restored
See also
set_config
Set global XGBoost configuration
get_config
Get current values of the global configuration
Set global configuration.
Global configuration consists of a collection of parameters that can be applied in the global scope. See https://xgboost.readthedocs.io/en/stable/parameter.html for the full list of parameters supported in the global configuration.
New in version 1.4.0.
new_config (Dict[str, Any]) – Keyword arguments representing the parameters and their values
Example
import xgboost as xgb
# Show all messages, including ones pertaining to debugging
xgb.set_config(verbosity=2)
# Get current value of global configuration
# This is a dict containing all parameters in the global configuration,
# including 'verbosity'
config = xgb.get_config()
assert config['verbosity'] == 2
# Example of using the context manager xgb.config_context().
# The context manager will restore the previous value of the global
# configuration upon exiting.
with xgb.config_context(verbosity=0):
# Suppress warning caused by model generated with XGBoost version < 1.0.0
bst = xgb.Booster(model_file='./old_model.bin')
assert xgb.get_config()['verbosity'] == 2 # old value restored
Get current values of the global configuration.
Global configuration consists of a collection of parameters that can be applied in the global scope. See https://xgboost.readthedocs.io/en/stable/parameter.html for the full list of parameters supported in the global configuration.
New in version 1.4.0.
args – The list of global parameters and their values
Dict[str, Any]
Example
import xgboost as xgb
# Show all messages, including ones pertaining to debugging
xgb.set_config(verbosity=2)
# Get current value of global configuration
# This is a dict containing all parameters in the global configuration,
# including 'verbosity'
config = xgb.get_config()
assert config['verbosity'] == 2
# Example of using the context manager xgb.config_context().
# The context manager will restore the previous value of the global
# configuration upon exiting.
with xgb.config_context(verbosity=0):
# Suppress warning caused by model generated with XGBoost version < 1.0.0
bst = xgb.Booster(model_file='./old_model.bin')
assert xgb.get_config()['verbosity'] == 2 # old value restored
Core XGBoost Library.
Bases: object
Data Matrix used in XGBoost.
DMatrix is an internal data structure that is used by XGBoost, which is optimized for both memory efficiency and training speed. You can construct DMatrix from multiple different sources of data.
data (os.PathLike/string/numpy.array/scipy.sparse/pd.DataFrame/dt.Frame/cudf.DataFrame/cupy.array/dlpack) – Data source of DMatrix. When data is a string or os.PathLike type, it represents the path to a libsvm format txt file, a csv file (by specifying the uri parameter ‘path_to_csv?format=csv’), or a binary file that xgboost can read from.
label (array_like) – Label of the training data.
weight (array_like) –
Weight for each instance.
Note
For ranking task, weights are per-group.
In ranking task, one weight is assigned to each group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
base_margin (array_like) – Base margin used for boosting from existing model.
missing (float, optional) – Value in the input data which should be treated as a missing value. If None, defaults to np.nan.
silent (boolean, optional) – Whether to print messages during construction.
feature_names (list, optional) – Set names for features.
feature_types (list, optional) – Set types for features.
nthread (integer, optional) – Number of threads to use for loading data when parallelization is applicable. If -1, uses maximum threads available on the system.
group (array_like) – Group size for all ranking group.
qid (array_like) – Query ID for data samples, used for ranking.
label_lower_bound (array_like) – Lower bound for survival training.
label_upper_bound (array_like) – Upper bound for survival training.
feature_weights (array_like, optional) – Set feature weights for column sampling.
enable_categorical (boolean, optional) –
New in version 1.3.0.
Experimental support of specializing for categorical features. Do not set to True unless you are interested in development. Currently it’s only available for gpu_hist tree method with 1 vs rest (one hot) categorical split. Also, JSON serialization format, gpu_predictor and pandas input are required.
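For illustration, here is a minimal sketch of constructing a DMatrix from an in-memory numpy array; the shapes and values are hypothetical:
import numpy as np
import xgboost as xgb

# Hypothetical training data: 100 rows, 10 features, binary labels.
X = np.random.rand(100, 10)
y = np.random.randint(2, size=100)

# Dense numpy input with labels and per-instance weights.
dtrain = xgb.DMatrix(X, label=y, weight=np.ones(100), missing=np.nan)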
Get feature names (column labels).
Get feature types (column types).
Get float property from the DMatrix.
field (str) – The field name of the information
info – a numpy array of float information of the data
array
Get the label of the DMatrix.
label
array
Get unsigned integer property from the DMatrix.
field (str) – The field name of the information
info – a numpy array of unsigned integer information of the data
array
Get the weight of the DMatrix.
weight
array
Get the number of columns (features) in the DMatrix.
number of columns
Save DMatrix to an XGBoost buffer. The saved binary can later be loaded by providing the path to xgboost.DMatrix() as input.
fname (string or os.PathLike) – Name of the output buffer file.
silent (bool (optional; default: True)) – If set, the output is suppressed.
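A short sketch of the save/load round trip; the file name is illustrative:
dtrain.save_binary('train.buffer')
# The saved buffer can be loaded back by passing its path to the constructor.
dtrain2 = xgb.DMatrix('train.buffer')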
Set base margin of booster to start from.
This can be used to specify a prediction value of an existing model to be the base_margin. Note that the raw margin is needed, not the transformed prediction; e.g., for logistic regression, the value must be supplied before the logistic transformation. See also example/demo.py.
margin (array like) – Prediction margin of each datapoint
Set float type property into the DMatrix.
field (str) – The field name of the information
data (numpy array) – The array of data to be set
Set float type property into the DMatrix for numpy 2d array input.
field (str) – The field name of the information
data (numpy array) – The array of data to be set
Set group size of DMatrix (used for ranking).
group (array like) – Group size of each group
Set meta info for DMatrix. See the doc string for xgboost.DMatrix.
Set label of DMatrix.
label (array like) – The label information to be set into DMatrix
Set uint type property into the DMatrix.
field (str) – The field name of the information
data (numpy array) – The array of data to be set
Set weight of each instance.
weight (array like) –
Weight for each data point
Note
For ranking task, weights are per-group.
In ranking task, one weight is assigned to each group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
Slice the DMatrix and return a new DMatrix that only contains rindex.
rindex (Union[List[int], numpy.ndarray]) – List of indices to be selected.
allow_groups (bool) – Allow slicing of a matrix with a groups attribute
res – A new DMatrix containing only selected indices.
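A minimal sketch of slicing, assuming dtrain is an existing DMatrix:
# Keep only the first three rows; the result is a new DMatrix.
dsub = dtrain.slice([0, 1, 2])
assert dsub.num_row() == 3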
Bases: xgboost.core.DMatrix
Device memory Data Matrix used in XGBoost for training with tree_method=’gpu_hist’. Do not use this for test/validation tasks as some information may be lost in quantisation. This DMatrix is primarily designed to save memory in training from device memory inputs by avoiding intermediate storage. Set max_bin to control the number of bins during quantisation. See the doc string in xgboost.DMatrix for documents on meta info.
You can construct DeviceQuantileDMatrix from cupy/cudf/dlpack.
New in version 1.1.0.
data (os.PathLike/string/numpy.array/scipy.sparse/pd.DataFrame/dt.Frame/cudf.DataFrame/cupy.array/dlpack) – Data source of DMatrix. When data is a string or os.PathLike type, it represents the path to a libsvm format txt file, a csv file (by specifying the uri parameter ‘path_to_csv?format=csv’), or a binary file that xgboost can read from.
label (array_like) – Label of the training data.
weight (array_like) –
Weight for each instance.
Note
For ranking task, weights are per-group.
In ranking task, one weight is assigned to each group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
base_margin (array_like) – Base margin used for boosting from existing model.
missing (float, optional) – Value in the input data which should be treated as a missing value. If None, defaults to np.nan.
silent (boolean, optional) – Whether to print messages during construction.
feature_names (list, optional) – Set names for features.
feature_types (list, optional) – Set types for features.
nthread (integer, optional) – Number of threads to use for loading data when parallelization is applicable. If -1, uses maximum threads available on the system.
group (array_like) – Group size for all ranking group.
qid (array_like) – Query ID for data samples, used for ranking.
label_lower_bound (array_like) – Lower bound for survival training.
label_upper_bound (array_like) – Upper bound for survival training.
feature_weights (array_like, optional) – Set feature weights for column sampling.
enable_categorical (boolean, optional) –
New in version 1.3.0.
Experimental support of specializing for categorical features. Do not set to True unless you are interested in development. Currently it’s only available for gpu_hist tree method with 1 vs rest (one hot) categorical split. Also, JSON serialization format, gpu_predictor and pandas input are required.
max_bin (int) – The number of bins used during quantisation.
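A hedged sketch of building a DeviceQuantileDMatrix from a CuPy array; this assumes a GPU-enabled build of xgboost with cupy installed, and the shapes are illustrative:
import cupy as cp
import xgboost as xgb

X = cp.random.rand(1000, 10)
y = cp.random.randint(2, size=1000)
# max_bin controls the number of bins used during quantisation.
dtrain = xgb.DeviceQuantileDMatrix(X, label=y, max_bin=256)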
Bases: object
A Booster of XGBoost.
Booster is the model of xgboost, containing low-level routines for training, prediction and evaluation.
Get attribute string from the Booster.
Get attributes stored in the Booster as a dictionary.
result – Returns an empty dict if there’s no attributes.
dictionary of attribute_name: attribute_value pairs of strings.
Boost the booster for one iteration, with customized gradient statistics. Like xgboost.Booster.update(), this function should not be called directly by users.
Copy the booster object.
booster – a copied booster model
Booster
Dump model into a text or JSON file. Unlike save_model, the output format is primarily used for visualization or interpretation, hence it’s more human readable but cannot be loaded back to XGBoost.
fout (string or os.PathLike) – Output file name.
fmap (string or os.PathLike, optional) – Name of the file containing feature map names.
with_stats (bool, optional) – Controls whether the split statistics are output.
dump_format (string, optional) – Format of model dump file. Can be ‘text’ or ‘json’.
Evaluate the model on mat.
Evaluate a set of data.
Feature names for this booster. Can be directly set by input data or by assignment.
Feature types for this booster. Can be directly set by input data or by assignment.
Returns the model dump as a list of strings. Unlike save_model, the output format is primarily used for visualization or interpretation, hence it’s more human readable but cannot be loaded back to XGBoost.
fmap (string or os.PathLike, optional) – Name of the file containing feature map names.
with_stats (bool, optional) – Controls whether the split statistics are output.
dump_format (string, optional) – Format of model dump. Can be ‘text’, ‘json’ or ‘dot’.
Get feature importance of each feature.
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
Note
Zero-importance features will not be included
Keep in mind that this function does not include zero-importance features, i.e. those features that have not been used in any split conditions.
fmap (str or os.PathLike (optional)) – The name of feature map file
Get feature importance of each feature. Importance type can be defined as:
‘weight’: the number of times a feature is used to split the data across all trees.
‘gain’: the average gain across all splits the feature is used in.
‘cover’: the average coverage across all splits the feature is used in.
‘total_gain’: the total gain across all splits the feature is used in.
‘total_cover’: the total coverage across all splits the feature is used in.
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
fmap (str or os.PathLike (optional)) – The name of feature map file.
importance_type (str, default 'weight') – One of the importance types defined above.
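For example, assuming bst is a trained tree booster:
# Map from feature name to its total gain across all splits.
scores = bst.get_score(importance_type='total_gain')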
Get split value histogram of a feature
feature (str) – The name of the feature.
fmap (str or os.PathLike (optional)) – The name of feature map file.
bins (int, default None) – The maximum number of bins. The number of bins equals the number of unique split values n_unique if bins == None or bins > n_unique.
as_pandas (bool, default True) – Return pd.DataFrame when pandas is installed. If False or pandas is not installed, return numpy ndarray.
a histogram of used splitting values for the specified feature, either as numpy array or pandas DataFrame.
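A short sketch, assuming bst is a trained booster and 'f0' is a hypothetical valid feature name:
# Histogram of split values used for feature 'f0', capped at 10 bins.
hist = bst.get_split_value_histogram('f0', bins=10)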
Run prediction in-place. Unlike the predict method, in-place prediction does not cache the prediction result.
Calling only inplace_predict in multiple threads is safe and lock free. But the safety does not hold when used in conjunction with other methods. E.g. you can’t train the booster in one thread and perform prediction in the other.
booster.set_param({'predictor': 'gpu_predictor'})
booster.inplace_predict(cupy_array)
booster.set_param({'predictor': 'cpu_predictor'})
booster.inplace_predict(numpy_array)
New in version 1.1.0.
data (numpy.ndarray/scipy.sparse.csr_matrix/cupy.ndarray/cudf.DataFrame/pd.DataFrame) – The input data; must not be a view for numpy array. Set predictor to gpu_predictor for running prediction on a CuPy array or CuDF DataFrame.
iteration_range (Tuple[int, int]) – See xgboost.Booster.predict() for details.
predict_type (str) – value outputs model prediction values; margin outputs the raw untransformed margin value.
missing (float) – See xgboost.DMatrix for details.
validate_features (bool) – See xgboost.Booster.predict() for details.
base_margin (Optional[Any]) – See xgboost.DMatrix for details. New in version 1.4.0.
strict_shape (bool) – See xgboost.Booster.predict() for details. New in version 1.4.0.
prediction – The prediction result. When input data is on GPU, prediction result is stored in a cupy array.
numpy.ndarray/cupy.ndarray
Load configuration returned by save_config.
New in version 1.0.0.
Load the model from a file or bytearray. The path to the file can be local or a URI.
The model is loaded from the XGBoost format, which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) will not be loaded when using binary format. To save those attributes, use JSON instead. See https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html for more info.
fname (string, os.PathLike, or a memory buffer) – Input file name or memory buffer (see also save_raw)
Get number of boosted rounds. For gblinear this is reset to 0 after serializing the model.
Predict with data.
Note
This function is not thread safe except for the gbtree booster. When using a booster other than gbtree, predict can only be called from one thread. If you want to run prediction using multiple threads, call xgboost.Booster.copy() to make copies of the model object and then call predict().
data (xgboost.core.DMatrix) – The dmatrix storing the input.
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
pred_leaf (bool) – When this option is on, the output will be a matrix of (nsample, ntrees) with each record indicating the predicted leaf index of each sample in each tree. Note that the leaf index of a tree is unique per tree, so you may find leaf 1 in both tree 1 and tree 0.
pred_contribs (bool) – When this is True the output will be a matrix of size (nsample, nfeats + 1) with each record indicating the feature contributions (SHAP values) for that prediction. The sum of all feature contributions is equal to the raw untransformed margin value of the prediction. Note the final column is the bias term.
approx_contribs (bool) – Approximate the contributions of each feature. Used when pred_contribs or pred_interactions is set to True. Changing the default of this parameter (False) is not recommended.
pred_interactions (bool) – When this is True the output will be a matrix of size (nsample, nfeats + 1, nfeats + 1) indicating the SHAP interaction values for each pair of features. The sum of each row (or column) of the interaction values equals the corresponding SHAP value (from pred_contribs), and the sum of the entire matrix equals the raw untransformed margin value of the prediction. Note the last row and column correspond to the bias term.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
training (bool) –
Whether the prediction value is used for training. This can affect the dart booster, which performs dropouts during training iterations.
New in version 1.0.0.
iteration_range (Tuple[int, int]) –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half open set) are used in this prediction.
New in version 1.4.0.
strict_shape (bool) –
When set to True, output shape is invariant to whether classification is used. For both value and margin prediction, the output shape is (n_samples, n_groups), n_groups == 1 when multi-class is not used. Default to False, in which case the output shape can be (n_samples, ) if multi-class is not used.
New in version 1.4.0.
Note – Using predict() with DART booster: if the booster object is DART type, predict() will not perform dropouts, i.e. all the trees will be evaluated. If you want to obtain results with dropouts, provide training=True.
prediction
numpy array
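A brief usage sketch, where bst is a trained booster and dtest is a hypothetical DMatrix:
# Predict using only the first 10 boosting rounds.
preds = bst.predict(dtest, iteration_range=(0, 10))
# SHAP values: shape (nsample, nfeats + 1); the last column is the bias term.
shap_values = bst.predict(dtest, pred_contribs=True)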
Output internal parameter configuration of Booster as a JSON string.
New in version 1.0.0.
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature_names) will not be saved when using binary format. To save those attributes, use JSON instead. See https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html for more info.
fname (string or os.PathLike) – Output file name
Save the model to an in-memory buffer representation instead of a file.
an in-memory buffer representation of the model
Set the attribute of the Booster.
**kwargs – The attributes to set. Setting a value to None deletes an attribute.
Set parameters into the Booster.
params (dict/list/str) – list of key,value pairs, dict of key to value or simply str key
value (optional) – value of the specified parameter, when params is str key
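For example, the following calls are equivalent sketches of the three accepted forms:
bst.set_param({'max_depth': 4, 'eta': 0.1})      # dict
bst.set_param([('max_depth', 4), ('eta', 0.1)])  # list of key,value pairs
bst.set_param('max_depth', 4)                    # str key with value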
Parse a boosted tree model text dump into a pandas DataFrame structure.
This feature is only defined when the decision tree model is chosen as base learner (booster in {gbtree, dart}). It is not defined for other base learner types, such as linear learners (booster=gblinear).
fmap (str or os.PathLike (optional)) – The name of feature map file.
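A one-line sketch, assuming bst is a trained gbtree or dart booster and pandas is installed:
# Each row describes one tree node: tree index, feature, split, gain, cover, etc.
df = bst.trees_to_dataframe()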
Update for one iteration, with objective function calculated internally. This function should not be called directly by users.
Training Library containing training routines.
Train a booster with given parameters.
params (dict) – Booster params.
dtrain (DMatrix) – Data to be trained.
num_boost_round (int) – Number of boosting iterations.
evals (list of pairs (DMatrix, string)) – List of validation sets for which metrics will be evaluated during training. Validation metrics will help us track the performance of the model.
obj (function) – Customized objective function.
feval (function) – Customized evaluation function.
maximize (bool) – Whether to maximize feval.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in evals. The method returns the model from the last iteration (not the best one). Use custom callback or model slicing if the best model is desired. If there’s more than one item in evals, the last entry will be used for early stopping. If there’s more than one metric in the eval_metric parameter given in params, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: bst.best_score, bst.best_iteration and bst.best_ntree_limit. Use bst.best_ntree_limit to get the correct value if num_parallel_tree and/or num_class appears in the parameters. best_ntree_limit is the result of num_parallel_tree * best_iteration.
evals_result (dict) –
This dictionary stores the evaluation results of all the items in watchlist. Example: with a watchlist containing [(dtest,'eval'), (dtrain,'train')] and a parameter containing ('eval_metric': 'logloss'), the evals_result returns
{'train': {'logloss': ['0.48253', '0.35953']},
 'eval': {'logloss': ['0.480385', '0.357756']}}
verbose_eval (bool or int) – Requires at least one item in evals. If verbose_eval is True then the evaluation metric on the validation set is printed at each boosting stage. If verbose_eval is an integer then the evaluation metric on the validation set is printed at every given verbose_eval boosting stage. The last boosting stage / the boosting stage found by using early_stopping_rounds is also printed. Example: with verbose_eval=4 and at least one item in evals, an evaluation metric is printed every 4 boosting stages, instead of every boosting stage.
xgb_model (file name of stored xgb model or 'Booster' instance) – Xgb model to be loaded before training (allows training continuation).
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
[xgb.callback.LearningRateScheduler(custom_rates)]
Booster
a trained booster model
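A hedged end-to-end sketch of train() with early stopping; dtrain and dvalid are hypothetical DMatrix objects:
params = {'objective': 'binary:logistic', 'eval_metric': 'logloss'}
evals_result = {}
bst = xgb.train(params, dtrain,
                num_boost_round=100,
                evals=[(dtrain, 'train'), (dvalid, 'eval')],
                early_stopping_rounds=10,
                evals_result=evals_result)
# With early stopping, the last metric of the last evals entry decides.
print(bst.best_iteration, bst.best_score)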
Cross-validation with given parameters.
params (dict) – Booster params.
dtrain (DMatrix) – Data to be trained.
num_boost_round (int) – Number of boosting iterations.
nfold (int) – Number of folds in CV.
stratified (bool) – Perform stratified sampling.
folds (a KFold or StratifiedKFold instance or list of fold indices) – Sklearn KFolds or StratifiedKFolds object. Alternatively may explicitly pass sample indices for each fold. For n folds, folds should be a length n list of tuples. Each tuple is (in, out) where in is a list of indices to be used as the training samples for the n-th fold and out is a list of indices to be used as the testing samples for the n-th fold.
metrics (string or list of strings) – Evaluation metrics to be watched in CV.
obj (function) – Custom objective function.
feval (function) – Custom evaluation function.
maximize (bool) – Whether to maximize feval.
early_stopping_rounds (int) – Activates early stopping. Cross-Validation metric (average of validation metric computed over CV folds) needs to improve at least once in every early_stopping_rounds round(s) to continue training. The last entry in the evaluation history will represent the best iteration. If there’s more than one metric in the eval_metric parameter given in params, the last metric will be used for early stopping.
fpreproc (function) – Preprocessing function that takes (dtrain, dtest, param) and returns transformed versions of those.
as_pandas (bool, default True) – Return pd.DataFrame when pandas is installed. If False or pandas is not installed, return np.ndarray
verbose_eval (bool, int, or None, default None) – Whether to display the progress. If None, progress will be displayed when np.ndarray is returned. If True, progress will be displayed at every boosting stage. If an integer is given, progress will be displayed at every given verbose_eval boosting stage.
show_stdv (bool, default True) – Whether to display the standard deviation in progress. Results are not affected; the returned evaluation history always contains std.
seed (int) – Seed used to generate the folds (passed to numpy.random.seed).
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
[xgb.callback.LearningRateScheduler(custom_rates)]
shuffle (bool) – Shuffle data before creating folds.
evaluation history
list(string)
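A minimal cv() sketch using the same hypothetical params and dtrain as above:
cv_results = xgb.cv(params, dtrain,
                    num_boost_round=100,
                    nfold=5,
                    metrics='logloss',
                    early_stopping_rounds=10,
                    seed=0)
# With as_pandas=True (the default), one row per round with mean and std.
print(cv_results.tail())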
Scikit-Learn Wrapper interface for XGBoost.
Bases: xgboost.sklearn.XGBModel, object
Implementation of the scikit-learn API for XGBoost regression.
n_estimators (int) – Number of gradient boosted trees. Equivalent to number of boosting rounds.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Default to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option from parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which should be treated as a missing value.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameters.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
Custom objective function
A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess:
y_true – The target values.
y_pred – The predicted values.
grad – The value of the gradient for each sample point.
hess – The value of the second derivative for each sample point.
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
coef_ – array of shape [n_features] or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting model.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – instance weights
base_margin (array_like) – global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. The callable custom objective is always minimized.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set. The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, str, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a ‘Booster’ instance; the XGBoost model to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
                                        save_best=True)]
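A short sketch of fitting the regressor with a validation set; the data names are hypothetical:
reg = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
reg.fit(X_train, y_train,
        eval_set=[(X_valid, y_valid)],
        early_stopping_rounds=10,
        verbose=False)
preds = reg.predict(X_valid)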
Get the underlying xgboost Booster of this model.
This will raise an exception when fit was not called.
booster
an xgboost booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_ – array of shape (1,) or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half open set) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.sklearn.XGBModel, object
Implementation of the scikit-learn API for XGBoost classification.
n_estimators (int) – Number of boosting rounds.
use_label_encoder (bool) – (Deprecated) Use the label encoder from scikit-learn to encode the labels. For new code, we recommend that you set this parameter to False.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Default to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option from parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which should be treated as a missing value.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameters.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
Custom objective function
A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess:
y_true – The target values.
y_pred – The predicted values.
grad – The value of the gradient for each sample point.
hess – The value of the second derivative for each sample point.
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
coef_ – array of shape [n_features] or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBClassifier(**param_dist)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting classifier.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – instance weights
base_margin (array_like) – global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. The callable custom objective is always minimized.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set. The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model – File name of a stored XGBoost model or a ‘Booster’ instance; the XGBoost model to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
                                        save_best=True)]
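An analogous sketch for the classifier; the data names are hypothetical:
clf = xgb.XGBClassifier(use_label_encoder=False, n_estimators=100)
clf.fit(X_train, y_train,
        eval_set=[(X_valid, y_valid)],
        eval_metric='logloss',
        verbose=False)
labels = clf.predict(X_valid)
proba = clf.predict_proba(X_valid)  # shape (n_samples, n_classes)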
Get the underlying xgboost Booster of this model.
This will raise an exception when fit was not called.
booster
an xgboost booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_ – array of shape (1,) or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half open set) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Predict the probability of each X example being of a given class.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Feature matrix.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) – Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half open set) are used in this prediction.
prediction – a numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class.
numpy array
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.sklearn.XGBModel, xgboost.sklearn.XGBRankerMixIn
Implementation of the Scikit-Learn API for XGBoost Ranking.
n_estimators (int) – Number of gradient boosted trees. Equivalent to number of boosting rounds.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Default to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option from parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which should be treated as a missing value.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameters.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
A custom objective function is currently not supported by XGBRanker. Likewise, a custom metric function is not supported either.
Note
Query group information is required for ranking tasks, supplied either via the group parameter or the qid parameter in the fit method.
Before fitting the model, your data need to be sorted by query group. When fitting the model, you need to provide an additional array that contains the size of each query group.
For example, if your original data look like:
qid   label   features
1     0       x_1
1     1       x_2
1     0       x_3
2     0       x_4
2     1       x_5
2     1       x_6
2     1       x_7
then your group array should be [3, 4]. Sometimes using query id (qid) instead of group can be more convenient.
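A sketch of fitting on the example above, using either group sizes or per-sample query ids; the feature matrix X and labels y are hypothetical:
ranker = xgb.XGBRanker(objective='rank:pairwise')
# Seven samples sorted by query group, with groups of size 3 and 4.
ranker.fit(X, y, group=[3, 4])
# Equivalently, pass one query id per sample instead of group sizes.
ranker.fit(X, y, qid=[1, 1, 1, 2, 2, 2, 2])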
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
coef_ – array of shape [n_features] or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
 'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting ranker.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
group (array_like) – Size of each query group of training data. Should have as many elements as the query groups in the training data. If this is set to None, then user must provide qid.
qid (array_like) – Query ID for each training sample. Should have the size of n_samples. If this is set to None, then user must provide group.
sample_weight (array_like) –
Query group weights
Note
Weights are per-group for ranking tasks
In ranking task, one weight is assigned to each query group/id (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
base_margin (array_like) – Global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_group (list of arrays, optional) – A list in which eval_group[i] is the list containing the sizes of all query groups in the i-th pair in eval_set.
eval_qid (list of array_like, optional) – A list in which eval_qid[i] is the array containing the query IDs of the i-th pair in eval_set.
eval_metric (str, list of str, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. The custom evaluation metric is not yet supported for the ranker.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set. The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, str, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a ‘Booster’ instance; the XGBoost model to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) –
A list of the form [L_1, L_2, …, L_n], where each L_i is a list of group weights on the i-th validation set.
Note
Weights are per-group for ranking tasks
In ranking task, one weight is assigned to each query group (not each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
save_best=True)]
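As an illustration of the group/qid parameters documented above, here is a minimal sketch of fitting the ranker on synthetic data (the data, labels and the rank:pairwise objective are assumptions made for this example):
import numpy as np
import xgboost as xgb

# Two query groups of 10 documents each, with graded relevance labels.
rng = np.random.RandomState(7)
X = rng.rand(20, 4)
y = rng.randint(0, 3, size=20)
qid = np.repeat([0, 1], 10)  # query id per sample, sorted by group

ranker = xgb.XGBRanker(objective='rank:pairwise', n_estimators=10)
ranker.fit(X, y, qid=qid)  # equivalently: ranker.fit(X, y, group=[10, 10])
scores = ranker.predict(X)  # one relevance score per sample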
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an xgboost Booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,)
or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
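For instance, a brief sketch of restricting prediction to the first five boosting rounds (a generic regressor on synthetic data is assumed here, since iteration_range behaves the same across the sklearn estimators):
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X, y = rng.rand(50, 3), rng.rand(50)
model = xgb.XGBRegressor(n_estimators=20).fit(X, y)

# Use only the trees built during rounds [0, 5) for this prediction.
partial = model.predict(X, iteration_range=(0, 5))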
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
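A minimal save/load round-trip sketch (the file name model.json is an assumption; a .json suffix selects the JSON variant of the internal format):
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(1)
model = xgb.XGBRegressor(n_estimators=5).fit(rng.rand(20, 3), rng.rand(20))

model.save_model('model.json')  # write the XGBoost internal format

restored = xgb.XGBRegressor()
restored.load_model('model.json')  # ready to call predict()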
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.sklearn.XGBRegressor
scikit-learn API for XGBoost random forest regression.
n_estimators (int) – Number of trees in random forest to fit.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameter.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
Custom objective function
A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess:
y_true (array_like of shape [n_samples]) – The target values
y_pred (array_like of shape [n_samples]) – The predicted values
grad (array_like of shape [n_samples]) – The value of the gradient for each sample point
hess (array_like of shape [n_samples]) – The value of the second derivative for each sample point
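For example, a hedged sketch of a squared-error objective with this signature (synthetic data; the constant hessian follows from the loss 0.5 * (y_pred - y_true)**2):
import numpy as np
import xgboost as xgb

def squared_error(y_true, y_pred):
    # Gradient and second derivative of 0.5 * (y_pred - y_true) ** 2.
    grad = y_pred - y_true
    hess = np.ones_like(y_pred)
    return grad, hess

rng = np.random.RandomState(2)
model = xgb.XGBRFRegressor(objective=squared_error, n_estimators=5)
model.fit(rng.rand(30, 4), rng.rand(30))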
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features]
or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_test, y_test)],
eval_metric='logloss',
verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting model.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – instance weights
base_margin (array_like) – global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. The callable custom metric is always minimized (see the sketch after this parameter list).
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set.
The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping.
If there’s more than one metric in eval_metric, the last metric will be used for early stopping.
If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, str, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a ‘Booster’ instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
save_best=True)]
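A minimal sketch of the callable eval_metric described in this parameter list (synthetic data; y_true arrives as a DMatrix, hence get_label()):
import numpy as np
import xgboost as xgb

def mean_abs_err(y_pred, dmatrix):
    # Must return a (name, value) pair; the metric is minimized.
    y_true = dmatrix.get_label()
    return 'mae', float(np.mean(np.abs(y_true - y_pred)))

rng = np.random.RandomState(3)
X, y = rng.rand(60, 4), rng.rand(60)
model = xgb.XGBRFRegressor(n_estimators=5)
model.fit(X, y, eval_set=[(X, y)], eval_metric=mean_abs_err, verbose=False)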
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an xgboost Booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,)
or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.sklearn.XGBClassifier
scikit-learn API for XGBoost random forest classification.
n_estimators (int) – Number of trees in random forest to fit.
use_label_encoder (bool) – (Deprecated) Use the label encoder from scikit-learn to encode the labels. For new code, we recommend that you set this parameter to False.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameter.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
Custom objective function
A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess:
y_true (array_like of shape [n_samples]) – The target values
y_pred (array_like of shape [n_samples]) – The predicted values
grad (array_like of shape [n_samples]) – The value of the gradient for each sample point
hess (array_like of shape [n_samples]) – The value of the second derivative for each sample point
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features]
or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBClassifier(**param_dist)
clf.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_test, y_test)],
eval_metric='logloss',
verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting classifier.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – instance weights
base_margin (array_like) – global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. The callable custom metric is always minimized.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set.
The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping.
If there’s more than one metric in eval_metric, the last metric will be used for early stopping.
If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model – File name of a stored XGBoost model or a ‘Booster’ instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
save_best=True)]
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an xgboost Booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,)
or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Predict the probability of each X example being of a given class.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Feature matrix.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) – Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
prediction – a numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class.
numpy array
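A short sketch of predict_proba on a toy binary problem (the synthetic data is an assumption made for this example):
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(4)
X = rng.rand(40, 3)
y = rng.randint(0, 2, size=40)

clf = xgb.XGBRFClassifier(n_estimators=10, use_label_encoder=False).fit(X, y)
proba = clf.predict_proba(X)  # shape (n_samples, n_classes)
assert proba.shape == (40, 2)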
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Plotting Library.
Plot importance based on fitted trees.
booster (Booster, XGBModel or dict) – Booster or XGBModel instance, or dict taken by Booster.get_fscore()
ax (matplotlib Axes, default None) – Target axes instance. If None, new figure and axes will be created.
grid (bool, default True) – Turn the axes grids on or off.
importance_type (str, default "weight") –
How the importance is calculated: either “weight”, “gain”, or “cover”
“weight” is the number of times a feature appears in a tree
“gain” is the average gain of splits which use the feature
“cover” is the average coverage of splits which use the feature, where coverage is defined as the number of samples affected by the split
max_num_features (int, default None) – Maximum number of top features displayed on plot. If None, all features will be displayed.
height (float, default 0.2) – Bar height, passed to ax.barh()
xlim (tuple, default None) – Tuple passed to axes.xlim()
ylim (tuple, default None) – Tuple passed to axes.ylim()
title (str, default "Feature importance") – Axes title. To disable, pass None.
xlabel (str, default "F score") – X axis title label. To disable, pass None.
ylabel (str, default "Features") – Y axis title label. To disable, pass None.
fmap (str or os.PathLike (optional)) – The name of feature map file.
show_values (bool, default True) – Show values on plot. To disable, pass False.
kwargs – Other keywords passed to ax.barh()
ax
matplotlib Axes
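A minimal usage sketch (the small booster trained on random data is an assumption; matplotlib is required):
import matplotlib.pyplot as plt
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(5)
dtrain = xgb.DMatrix(rng.rand(50, 4), label=rng.rand(50))
booster = xgb.train({'max_depth': 2}, dtrain, num_boost_round=10)

ax = xgb.plot_importance(booster, importance_type='gain', max_num_features=3)
plt.show()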
Plot specified tree.
booster (Booster, XGBModel) – Booster or XGBModel instance
fmap (str (optional)) – The name of feature map file
num_trees (int, default 0) – Specify the ordinal number of target tree
rankdir (str, default "TB") – Passed to graphviz via graph_attr
ax (matplotlib Axes, default None) – Target axes instance. If None, new figure and axes will be created.
kwargs – Other keywords passed to to_graphviz
ax
matplotlib Axes
Convert specified tree to a graphviz instance. IPython can automatically plot the returned graphviz instance. Otherwise, you should call the .render() method of the returned graphviz instance.
booster (Booster, XGBModel) – Booster or XGBModel instance
fmap (str (optional)) – The name of feature map file
num_trees (int, default 0) – Specify the ordinal number of target tree
rankdir (str, default "UT") – Passed to graphviz via graph_attr
yes_color (str, default '#0000FF') – Edge color when the node condition is met.
no_color (str, default '#FF0000') – Edge color when the node condition is not met.
condition_node_params (dict, optional) –
Condition node configuration for graphviz. Example:
{'shape': 'box',
'style': 'filled,rounded',
'fillcolor': '#78bceb'}
leaf_node_params (dict, optional) –
Leaf node configuration for graphviz. Example:
{'shape': 'box',
'style': 'filled',
'fillcolor': '#e48038'}
**kwargs (dict, optional) – Other keywords passed to graphviz graph_attr, e.g. graph [ {key} = {value} ]
graph
graphviz.Source
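A brief sketch, assuming the graphviz Python package is installed and reusing a small booster trained on random data:
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(6)
dtrain = xgb.DMatrix(rng.rand(50, 4), label=rng.rand(50))
booster = xgb.train({'max_depth': 2}, dtrain, num_boost_round=3)

graph = xgb.to_graphviz(
    booster,
    num_trees=1,  # ordinal of the target tree
    condition_node_params={'shape': 'box', 'style': 'filled,rounded'},
)
graph.render('tree_1')  # writes tree_1 and tree_1.pdf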
Interface for training callback.
New in version 1.3.0.
Print the evaluation result at each iteration.
New in version 1.3.0.
Callback function for early stopping
New in version 1.3.0.
rounds (int) – Early stopping rounds.
metric_name (Optional[str]) – Name of metric that is used for early stopping.
data_name (Optional[str]) – Name of dataset that is used for early stopping.
maximize (Optional[bool]) – Whether to maximize evaluation metric. None means auto (discouraged).
save_best (Optional[bool]) – Whether training should return the best model or the last model.
Callback function for scheduling learning rate.
New in version 1.3.0.
learning_rates (callable/collections.Sequence) – If it’s a callable object, it should accept an integer parameter epoch and return the corresponding learning rate. Otherwise it should be a sequence (e.g. list or tuple) with the same length as the number of boosting rounds.
Checkpointing operation.
New in version 1.3.0.
directory (os.PathLike) – Output model directory.
name (str) – Pattern of the output model file. Models will be saved as name_0.json, name_1.json, name_2.json ….
as_pickle (boolean) – When set to True, all training parameters will be saved in pickle format, instead of saving only the model.
iterations (int) – Interval of checkpointing. Checkpointing is slow, so setting a larger interval can reduce the performance hit.
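A combined sketch of these callbacks with xgb.train (synthetic data; the learning-rate schedule and round counts are assumptions made for this example):
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(8)
dtrain = xgb.DMatrix(rng.rand(80, 4), label=rng.rand(80))
dvalid = xgb.DMatrix(rng.rand(20, 4), label=rng.rand(20))

callbacks = [
    xgb.callback.EarlyStopping(rounds=3, save_best=True),
    xgb.callback.LearningRateScheduler(lambda epoch: 0.3 * 0.9 ** epoch),
]
booster = xgb.train(
    {'objective': 'reg:squarederror'},
    dtrain,
    num_boost_round=50,
    evals=[(dvalid, 'validation')],
    callbacks=callbacks,
)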
Dask extensions for distributed training. See https://xgboost.readthedocs.io/en/latest/tutorials/dask.html for a simple tutorial. Also see xgboost/demo/dask for some examples.
There are two sets of APIs in this module: one is the functional API, including the train and predict methods; the other is the stateful Scikit-Learn wrapper inherited from the single-node Scikit-Learn interface.
The implementation is heavily influenced by dask_xgboost: https://github.com/dask/dask-xgboost
Bases: object
DMatrix holding references to a Dask DataFrame or Dask Array. Constructing a DaskDMatrix forces all lazy computation to be carried out. Wait for the input data explicitly if you want to see the actual computation of constructing a DaskDMatrix.
See the doc for the xgboost.DMatrix constructor for other parameters. DaskDMatrix accepts only dask collections.
Note
DaskDMatrix does not repartition or move data between workers. It’s the caller’s responsibility to balance the data.
New in version 1.0.0.
client (distributed.Client) – Specify the dask client used for training. Use default client returned from dask if it’s set to None.
data (Union[da.Array, dd.DataFrame, dd.Series]) –
label (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
weight (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
base_margin (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
missing (float) –
silent (bool) –
feature_types (Optional[Union[List[Any], Any]]) –
group (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
qid (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
label_lower_bound (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
label_upper_bound (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
feature_weights (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
enable_categorical (bool) –
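A minimal construction sketch (a local cluster and random dask collections are assumptions made for this example):
import dask.array as da
import xgboost as xgb
from distributed import Client, LocalCluster

with Client(LocalCluster(n_workers=2)) as client:
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = da.random.random(1000, chunks=100)
    dtrain = xgb.dask.DaskDMatrix(client, X, y)  # triggers the lazy computation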
Bases: xgboost.dask.DaskDMatrix
Specialized data type for the gpu_hist tree method. This class is used to reduce memory usage by eliminating data copies. Internally, all partitions/chunks of data are merged by weighted GK sketching, so the number of partitions from dask may affect training accuracy as GK generates bounded error for each merge. See the doc strings for xgboost.DeviceQuantileDMatrix and xgboost.DMatrix for other parameters.
New in version 1.2.0.
max_bin (int) – Number of bins for histogram construction.
client (distributed.Client) –
data (Union[da.Array, dd.DataFrame, dd.Series]) –
label (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
weight (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
base_margin (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
missing (float) –
silent (bool) –
feature_types (Optional[Union[List[Any], Any]]) –
group (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
qid (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
label_lower_bound (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
label_upper_bound (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
feature_weights (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) –
enable_categorical (bool) –
Train XGBoost model.
New in version 1.0.0.
Note
Other parameters are the same as xgboost.train() except for evals_result, which is returned as part of the function return value instead of as an argument.
client (distributed.Client) – Specify the dask client used for training. Use default client returned from dask if it’s set to None.
params (Dict[str, Any]) –
dtrain (xgboost.dask.DaskDMatrix) –
num_boost_round (int) –
evals (Optional[List[Tuple[xgboost.dask.DaskDMatrix, str]]]) –
obj (Optional[Callable[[numpy.ndarray, xgboost.core.DMatrix], Tuple[numpy.ndarray, numpy.ndarray]]]) –
feval (Optional[Callable[[numpy.ndarray, xgboost.core.DMatrix], Tuple[str, float]]]) –
early_stopping_rounds (Optional[int]) –
xgb_model (Optional[xgboost.core.Booster]) –
callbacks (Optional[List[xgboost.callback.TrainingCallback]]) –
results – A dictionary containing trained booster and evaluation history. history field is the same as eval_result from xgboost.train.
{'booster': xgboost.Booster,
'history': {'train': {'logloss': ['0.48253', '0.35953']},
'eval': {'logloss': ['0.480385', '0.357756']}}}
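A short training sketch matching the return structure above (local cluster and random data are assumptions made for this example):
import dask.array as da
import xgboost as xgb
from distributed import Client, LocalCluster

with Client(LocalCluster(n_workers=2)) as client:
    X = da.random.random((1000, 10), chunks=(100, 10))
    y = da.random.random(1000, chunks=100)
    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    output = xgb.dask.train(
        client,
        {'objective': 'reg:squarederror'},
        dtrain,
        num_boost_round=10,
        evals=[(dtrain, 'train')],
    )
    booster, history = output['booster'], output['history']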
Run prediction with a trained booster.
Note
Using inplace_predict might be faster when some features are not needed. See xgboost.Booster.predict() for details on various parameters. When the output has more than 2 dimensions (shap value, leaf with strict_shape), the input should be da.Array or DaskDMatrix.
New in version 1.0.0.
client (distributed.Client) – Specify the dask client used for training. Use default client returned from dask if it’s set to None.
model (Union[Dict[str, Any], xgboost.core.Booster, distributed.Future]) – The trained model. It can be a distributed.Future so user can pre-scatter it onto all workers.
data (Union[xgboost.dask.DaskDMatrix, da.Array, dd.DataFrame, dd.Series]) – Input data used for prediction. When input is a dataframe object, prediction output is a series.
missing (float) – Used when input data is not DaskDMatrix. Specify the value considered as missing.
output_margin (bool) –
pred_leaf (bool) –
pred_contribs (bool) –
approx_contribs (bool) –
pred_interactions (bool) –
validate_features (bool) –
strict_shape (bool) –
prediction – When the input data is dask.array.Array or DaskDMatrix, the return value is an array; when the input data is dask.dataframe.DataFrame, the return value can be dask.dataframe.Series or dask.dataframe.DataFrame, depending on the output shape.
dask.array.Array/dask.dataframe.Series
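Continuing the training sketch above (client, booster and dtrain are assumed to still be in scope):
# Returns a lazy dask collection; call compute() to materialize it.
predictions = xgb.dask.predict(client, booster, dtrain)
print(predictions.compute()[:5])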
Inplace prediction. See the doc in xgboost.Booster.inplace_predict() for details.
New in version 1.1.0.
client (distributed.Client) – Specify the dask client used for training. Use default client returned from dask if it’s set to None.
model (Union[Dict[str, Any], xgboost.core.Booster, distributed.Future]) – See xgboost.dask.predict() for details.
data (Union[da.Array, dd.DataFrame, dd.Series]) – dask collection.
iteration_range (Tuple[int, int]) – See xgboost.Booster.predict() for details.
predict_type (str) – See xgboost.Booster.inplace_predict() for details.
missing (float) – Value in the input data which is to be treated as missing. If None, defaults to np.nan.
base_margin (Optional[Union[da.Array, dd.DataFrame, dd.Series]]) – See xgboost.DMatrix for details. Right now the classifier is not well supported with base_margin as it requires the size of the base margin to be n_classes * n_samples.
New in version 1.4.0.
strict_shape (bool) – See xgboost.Booster.predict() for details.
New in version 1.4.0.
validate_features (bool) –
prediction – When the input data is dask.array.Array, the return value is an array; when the input data is dask.dataframe.DataFrame, the return value can be dask.dataframe.Series or dask.dataframe.DataFrame, depending on the output shape.
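And a matching inplace-prediction sketch under the same assumptions, passing the raw dask collection directly instead of a DaskDMatrix:
# X is the original dask array from the training sketch.
inplace = xgb.dask.inplace_predict(client, booster, X)
print(inplace.compute()[:5])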
Bases: xgboost.dask.DaskScikitLearnBase, object
Implementation of the scikit-learn API for XGBoost classification.
n_estimators (int) – Number of gradient boosted trees. Equivalent to number of boosting rounds.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameter.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
The dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker.
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features]
or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_test, y_test)],
eval_metric='logloss',
verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting model.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – instance weights
base_margin (array_like) – global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. The callable custom metric is always minimized.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set.
The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping.
If there’s more than one metric in eval_metric, the last metric will be used for early stopping.
If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a ‘Booster’ instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
save_best=True)]
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an xgboost Booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,)
or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Predict the probability of each X example being of a given class.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Feature matrix.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) – Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
prediction – a numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class.
numpy array
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.dask.DaskScikitLearnBase, object
Implementation of the Scikit-Learn API for XGBoost.
n_estimators (int) – Number of gradient boosted trees. Equivalent to number of boosting rounds.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight(hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instance.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameter.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
The dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker.
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features]
or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_test, y_test)],
eval_metric='logloss',
verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting model.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – instance weights
base_margin (array_like) – global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. The callable custom metric is always minimized.
early_stopping_rounds (int) – Activates early stopping. Validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set.
The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping.
If there’s more than one metric in eval_metric, the last metric will be used for early stopping.
If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a ‘Booster’ instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at end of each iteration. It is possible to use predefined callbacks by using Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
save_best=True)]
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an xgboost Booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,)
or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees are used in prediction. For example, if a random forest is trained with 100 rounds, specifying iteration_range=(10, 20) means only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
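As a hedged sketch of the iteration_range parameter described above (model and X_test are placeholder names for a fitted estimator and its input):
# Use only the trees built during rounds [0, 20) for this prediction.
preds = model.predict(X_test, iteration_range=(0, 20))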
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
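For illustration, a short sketch of how set_params accepts both sklearn member variables and raw xgboost parameters (the values shown are placeholders):
# max_depth is a member variable; eta is a raw xgboost parameter.
model.set_params(max_depth=4, eta=0.1)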
Bases: xgboost.dask.DaskScikitLearnBase, xgboost.sklearn.XGBRankerMixIn
Implementation of the Scikit-Learn API for XGBoost Ranking.
New in version 1.4.0.
n_estimators (int) – Number of gradient boosted trees. Equivalent to number of boosting rounds.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight (hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instances.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using the gblinear booster with the shotgun updater is nondeterministic as it uses the Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See the tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameters.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
For the dask implementation, group is not supported; use qid instead.
kwargs (Any) –
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
The dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker; see the sketch below.
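A minimal sketch of attaching a client explicitly, assuming a running dask.distributed deployment (the scheduler address is a placeholder):
from dask.distributed import Client

client = Client('tcp://scheduler:8786')  # placeholder scheduler address
model.client = client  # the estimator will use this client for training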
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features] or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective': 'binary:logistic', 'n_estimators': 2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
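For illustration, a sketch of ranking features by importance after fitting; feature_names is a hypothetical list of column names:
import numpy as np

importances = model.feature_importances_
order = np.argsort(importances)[::-1]  # most to least important
for i in order:
    print(feature_names[i], importances[i])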
Fit gradient boosting ranker.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
group (array_like) – Size of each query group of training data. Should have as many elements as the query groups in the training data. If this is set to None, then user must provide qid.
qid (array_like) – Query ID for each training sample. Should have the size of n_samples. If this is set to None, then the user must provide group. A fitting sketch follows this parameter list.
sample_weight (array_like) –
Query group weights
Note
Weights are per-group for ranking tasks
In a ranking task, one weight is assigned to each query group/id (not to each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
base_margin (array_like) – Global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_group (list of arrays, optional) – A list in which eval_group[i] is the list containing the sizes of all query groups in the i-th pair in eval_set.
eval_qid (list of array_like, optional) – A list in which eval_qid[i] is the array containing the query IDs of the i-th pair in eval_set.
eval_metric (str, list of str, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. The custom evaluation metric is not yet supported for the ranker.
early_stopping_rounds (int) – Activates early stopping. The validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set. The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is written to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) –
A list of the form [L_1, L_2, …, L_n], where each L_i is a list of group weights on the i-th validation set.
Note
Weights are per-group for ranking tasks
In a ranking task, one weight is assigned to each query group (not to each data point). This is because we only care about the relative ordering of data points within each group, so it doesn’t make sense to assign weights to individual data points.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at the end of each iteration. It is possible to use predefined callbacks by using the Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
                                        save_best=True)]
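A minimal sketch of fitting the ranker with qid on dask collections, assuming a local cluster; the sizes and values are placeholders:
import numpy as np
import dask.array as da
from dask.distributed import Client
import xgboost as xgb

client = Client()  # local cluster for illustration
X = da.random.random((100, 10), chunks=(50, 10))
y = da.random.randint(0, 5, size=100, chunks=50)  # relevance labels
qid = da.from_array(np.repeat(np.arange(4), 25), chunks=50)  # 4 query groups, sorted
ranker = xgb.dask.DaskXGBRanker(n_estimators=10)
ranker.fit(X, y, qid=qid)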
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an XGBoost Booster of the underlying model
Gets the number of xgboost boosting rounds.
Get parameters.
Get xgboost specific parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,) or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
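For illustration, a round trip through the internal format described above (the file name is a placeholder; JSON is one flavor of the internal format):
model.save_model('model.json')
loaded = xgb.dask.DaskXGBRanker()
loaded.load_model('model.json')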
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.dask.DaskXGBRegressor
Implementation of the Scikit-Learn API for XGBoost Random Forest Regressor.
New in version 1.4.0.
n_estimators (int) – Number of trees in random forest to fit.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight (hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instances.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using the gblinear booster with the shotgun updater is nondeterministic as it uses the Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See the tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameters.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
Custom objective function
A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess:
y_true – The target values
y_pred – The predicted values
grad – The value of the gradient for each sample point.
hess – The value of the second derivative for each sample point.
A sketch follows below.
kwargs (Any) –
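As a hedged sketch of the objective(y_true, y_pred) -> grad, hess signature from the note above, a plain squared-error objective (an illustration, not the library’s built-in implementation):
import numpy as np

def squared_error(y_true, y_pred):
    # Gradient and hessian of 0.5 * (y_pred - y_true)**2 per sample.
    grad = y_pred - y_true
    hess = np.ones_like(y_pred)
    return grad, hess

reg = xgb.dask.DaskXGBRFRegressor(objective=squared_error)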
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
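A short sketch of reading leaf indices with this method (model and X are placeholders; the model must be fitted first):
leaves = model.apply(X)  # one leaf index per (sample, tree) pair
print(leaves.shape)      # (n_samples, n_trees)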
The dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker.
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features] or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective': 'binary:logistic', 'n_estimators': 2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting model.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – Instance weights
base_margin (array_like) – Global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a str, value pair, where the str is a name for the evaluation and the value is the value of the evaluation function. The callable custom metric is always minimized.
early_stopping_rounds (int) – Activates early stopping. The validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set. The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is written to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at the end of each iteration. It is possible to use predefined callbacks by using the Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
                                        save_best=True)]
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an XGBoost Booster of the underlying model
Get parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,) or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self
Bases: xgboost.dask.DaskXGBClassifier
Implementation of the Scikit-Learn API for XGBoost Random Forest Classifier.
New in version 1.4.0.
n_estimators (int) – Number of trees in random forest to fit.
max_depth (int) – Maximum tree depth for base learners.
learning_rate (float) – Boosting learning rate (xgb’s “eta”)
verbosity (int) – The degree of verbosity. Valid values are 0 (silent) - 3 (debug).
objective (string or callable) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below).
booster (string) – Specify which booster to use: gbtree, gblinear or dart.
tree_method (string) – Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It’s recommended to study this option in the parameters document.
n_jobs (int) – Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms.
gamma (float) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight (float) – Minimum sum of instance weight (hessian) needed in a child.
max_delta_step (float) – Maximum delta step we allow each tree’s weight estimation to be.
subsample (float) – Subsample ratio of the training instances.
colsample_bytree (float) – Subsample ratio of columns when constructing each tree.
colsample_bylevel (float) – Subsample ratio of columns for each level.
colsample_bynode (float) – Subsample ratio of columns for each split.
reg_alpha (float (xgb's alpha)) – L1 regularization term on weights
reg_lambda (float (xgb's lambda)) – L2 regularization term on weights
scale_pos_weight (float) – Balancing of positive and negative weights.
base_score – The initial prediction score of all instances, global bias.
random_state (int) –
Random number seed.
Note
Using the gblinear booster with the shotgun updater is nondeterministic as it uses the Hogwild algorithm.
missing (float, default np.nan) – Value in the data which is to be treated as missing.
num_parallel_tree (int) – Used for boosting random forest.
monotone_constraints (str) – Constraint of variable monotonicity. See tutorial for more information.
interaction_constraints (str) – Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See the tutorial for more information.
importance_type (string, default "gain") – The feature importance type for the feature_importances_ property: either “gain”, “weight”, “cover”, “total_gain” or “total_cover”.
gpu_id – Device ordinal.
validate_parameters – Give warnings for unknown parameters.
**kwargs (dict, optional) –
Keyword arguments for XGBoost Booster object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.rst. Attempting to set a parameter via the constructor args and **kwargs dict simultaneously will result in a TypeError.
Note
**kwargs unsupported by scikit-learn
**kwargs is unsupported by scikit-learn. We do not guarantee that parameters passed via this argument will interact properly with scikit-learn.
Note
Custom objective function
A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess:
y_true – The target values
y_pred – The predicted values
grad – The value of the gradient for each sample point.
hess – The value of the second derivative for each sample point.
kwargs (Any) –
Return the predicted leaf of every tree for each sample.
X_leaves – For each datapoint x in X and for each tree, return the index of the leaf x ends up in. Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering.
array_like, shape=[n_samples, n_trees]
The dask client used in this model. The Client object cannot be serialized for transmission, so if the task is launched from a worker instead of directly from the client process, this attribute needs to be set at that worker.
Coefficients property
Note
Coefficients are defined only for linear learners
Coefficients are only defined when the linear model is chosen as base learner (booster=gblinear). They are not defined for other base learner types, such as tree learners (booster=gbtree).
coef_
array of shape [n_features] or [n_classes, n_features]
Return the evaluation results.
If eval_set is passed to the fit function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the fit function, the evals_result will contain the eval_metrics passed to the fit function.
evals_result
dictionary
Example
param_dist = {'objective': 'binary:logistic', 'n_estimators': 2}
clf = xgb.XGBModel(**param_dist)
clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
Feature importances property
Note
Feature importance is defined only for tree boosters
Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear).
feature_importances_
array of shape [n_features]
Fit gradient boosting model.
Note that calling fit() multiple times will cause the model object to be re-fit from scratch. To resume training from a previous checkpoint, explicitly pass the xgb_model argument.
X (array_like) – Feature matrix
y (array_like) – Labels
sample_weight (array_like) – Instance weights
base_margin (array_like) – Global bias for each instance.
eval_set (list, optional) – A list of (X, y) tuple pairs to use as validation sets, for which metrics will be computed. Validation metrics will help us track the performance of the model.
eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See doc/parameter.rst. If a list of str, should be the list of multiple built-in evaluation metrics to use. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object such that you may need to call the get_label method. It must return a str, value pair, where the str is a name for the evaluation and the value is the value of the evaluation function. The callable custom metric is always minimized.
early_stopping_rounds (int) – Activates early stopping. The validation metric needs to improve at least once in every early_stopping_rounds round(s) to continue training. Requires at least one item in eval_set. The method returns the model from the last iteration (not the best one). If there’s more than one item in eval_set, the last entry will be used for early stopping. If there’s more than one metric in eval_metric, the last metric will be used for early stopping. If early stopping occurs, the model will have three additional fields: clf.best_score, clf.best_iteration and clf.best_ntree_limit.
verbose (bool) – If verbose is True and an evaluation set is used, the evaluation metric measured on the validation set is written to stderr.
xgb_model (Optional[Union[xgboost.core.Booster, xgboost.sklearn.XGBModel]]) – File name of a stored XGBoost model or a Booster instance to be loaded before training (allows training continuation).
sample_weight_eval_set (list, optional) – A list of the form [L_1, L_2, …, L_n], where each L_i is an array like object storing instance weights for the i-th validation set.
base_margin_eval_set (list, optional) – A list of the form [M_1, M_2, …, M_n], where each M_i is an array like object storing base margin for the i-th validation set.
feature_weights (array_like) – Weight for each feature, defines the probability of each feature being selected when colsample is being used. All values must be greater than 0, otherwise a ValueError is thrown. Only available for hist, gpu_hist and exact tree methods.
callbacks (list of callback functions) –
List of callback functions that are applied at the end of each iteration. It is possible to use predefined callbacks by using the Callback API. Example:
callbacks = [xgb.callback.EarlyStopping(rounds=early_stopping_rounds,
                                        save_best=True)]
Get the underlying xgboost Booster of this model.
This will raise an exception if fit has not been called.
booster
an XGBoost Booster of the underlying model
Get parameters.
Intercept (bias) property
Note
Intercept is defined only for linear learners
Intercept (bias) is only defined when the linear model is chosen as base learner (booster=gblinear). It is not defined for other base learner types, such as tree learners (booster=gbtree).
intercept_
array of shape (1,) or [n_classes]
Load the model from a file.
The model is loaded from an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be loaded.
fname (string) – Input file name.
Predict with X.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Data to predict with
output_margin (bool) – Whether to output the raw untransformed margin value.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) –
Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
New in version 1.4.0.
prediction
numpy array
Predict the probability of each X example being of a given class.
Note
This function is only thread safe for gbtree and dart.
X (array_like) – Feature matrix.
ntree_limit (int) – Deprecated, use iteration_range instead.
validate_features (bool) – When this is True, validate that the Booster’s and data’s feature_names are identical. Otherwise, it is assumed that the feature_names are the same.
base_margin (array_like) – Margin added to prediction.
iteration_range (Optional[Tuple[int, int]]) – Specifies which layer of trees is used in prediction. For example, if a random forest is trained with 100 rounds and iteration_range=(10, 20) is specified, then only the forests built during rounds [10, 20) (half-open interval) are used in this prediction.
prediction – A numpy array of shape (n_samples, n_classes) with the probability of each data example being of a given class; see the sketch below.
numpy array
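For illustration (clf and X_test are placeholder names for a fitted classifier and its input):
proba = clf.predict_proba(X_test)  # shape (n_samples, n_classes)
labels = proba.argmax(axis=1)      # most likely class per sample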
Save the model to a file.
The model is saved in an XGBoost internal format which is universal among the various XGBoost interfaces. Auxiliary attributes of the Python Booster object (such as feature names) will not be saved.
fname (string) – Output file name
Set the parameters of this estimator. Modification of the sklearn method to allow unknown kwargs. This allows using the full range of xgboost parameters that are not defined as member variables in sklearn grid search.
self