Gradient boosting is one of the most effective techniques for building machine learning models. It is based on the idea of iteratively improving weak learners (models with limited predictive power on their own).
Do you want to learn more about machine learning with R? Check our complete guide to decision trees.
The general idea behind gradient boosting is to combine weak learners into a single, more accurate model. These weak learners are typically shallow decision trees, and gradient boosting combines many of them so that each new tree reduces the error left by the trees before it.
The first successful boosting algorithm was AdaBoost (Adaptive Boosting), which combines multiple single-split decision trees (decision stumps). AdaBoost puts more emphasis on observations that are difficult to classify, adding new weak learners where the existing ones fall short.
In a nutshell, gradient boosting consists of only three elements: a loss function to be optimized, a weak learner to make predictions, and an additive model that adds weak learners one at a time to minimize the loss. The short sketch below illustrates the idea.
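To make the intuition concrete, here is a purely illustrative toy sketch of boosting for regression with squared error, using small rpart trees as the weak learners. The function name and parameters are made up for this example, and this is not how xgboost is implemented internally:

```r
library(rpart)

boost_sketch <- function(X, y, n_rounds = 50, eta = 0.1) {
  pred  <- rep(mean(y), length(y))        # start from a constant prediction
  trees <- vector("list", n_rounds)
  for (i in seq_len(n_rounds)) {
    fit_data   <- X
    fit_data$r <- y - pred                # residuals = negative gradient of squared error
    trees[[i]] <- rpart(r ~ ., data = fit_data, control = rpart.control(maxdepth = 2))
    pred <- pred + eta * predict(trees[[i]], X)  # additive update with shrinkage (learning rate)
  }
  list(base = mean(y), trees = trees, eta = eta)
}

# Example: boost shallow trees to predict Sepal.Length from the other numeric columns
fit <- boost_sketch(iris[, 2:4], iris$Sepal.Length)
```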
You now know the basics of gradient boosting. The following section will introduce the most popular gradient boosting algorithm – XGBoost.
XGBoost stands for eXtreme Gradient Boosting and is the algorithm behind many winning Kaggle solutions. It was designed specifically to deliver state-of-the-art results fast.
XGBoost is a go-to algorithm for both regression and classification. As the name suggests, it uses the gradient boosting technique, adding more and more weak learners until no further improvement can be made.
Today you’ll learn how to use the XGBoost algorithm with R by modeling one of the simplest and best-known datasets – the Iris dataset – starting from the next section.
As mentioned earlier, the Iris dataset will be used to demonstrate how the XGBoost algorithm works. Let’s start simple with a necessary first step – library and dataset imports. You’ll need only a few, and the dataset is built into R:
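A minimal sketch of those imports, assuming the xgboost and caret packages (caret is used here only for the train/test split later on):

```r
library(xgboost)  # gradient boosting implementation
library(caret)    # used below for the train/test split

# The Iris dataset ships with base R
data(iris)
```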
Here’s what the first couple of rows look like (the output of head() on the built-in dataset):
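```r
head(iris)
#>   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 1          5.1         3.5          1.4         0.2  setosa
#> 2          4.9         3.0          1.4         0.2  setosa
#> 3          4.7         3.2          1.3         0.2  setosa
#> 4          4.6         3.1          1.5         0.2  setosa
#> 5          5.0         3.6          1.4         0.2  setosa
#> 6          5.4         3.9          1.7         0.4  setosa
```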
There’s no point in further exploration of the dataset, as anyone in the world of data already knows everything about it.
The next step is to split the dataset into training and testing subsets. The following code snippet splits the dataset in a 70:30 ratio and then further splits each subset into features (X) and target (y). This step is necessary for the training process:
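Here’s one way to do it, assuming caret’s createDataPartition() for a stratified 70:30 split; the seed and variable names are illustrative:

```r
set.seed(42)  # illustrative seed, for reproducibility only

# Stratified 70:30 split on the target column
train_idx <- createDataPartition(iris$Species, p = 0.7, list = FALSE)
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

# Separate features (X) and target (y); XGBoost expects 0-indexed integer class labels
X_train <- as.matrix(train[, 1:4])
y_train <- as.integer(train$Species) - 1
X_test  <- as.matrix(test[, 1:4])
y_test  <- as.integer(test$Species) - 1
```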
You now have everything needed to start with the training process. Let’s do that in the next section.
XGBoost uses something known as a DMatrix to store data. DMatrix is nothing but a specific data structure optimized for both memory efficiency and training speed.
Besides the DMatrix, you’ll also have to specify the parameters for the XGBoost model. You can learn more about all the available parameters in the official XGBoost documentation, but we’ll stick to a subset of the most basic ones.
The following snippet shows you how to construct DMatrix data structures for both training and testing subsets and how to build a list of parameters:
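A sketch of both steps, assuming the variable names from the split above; the parameter values are illustrative defaults for multiclass classification, not tuned settings:

```r
# Wrap the matrices in XGBoost's optimized DMatrix structure
xgb_train <- xgb.DMatrix(data = X_train, label = y_train)
xgb_test  <- xgb.DMatrix(data = X_test,  label = y_test)

# Basic parameters for multiclass classification
xgb_params <- list(
  booster = "gbtree",
  objective = "multi:softprob",  # output per-class probabilities
  num_class = 3,                 # three iris species
  eta = 0.3,                     # learning rate
  max_depth = 6
)
```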
Now you have everything needed to build a model. Here’s how:
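For example, with xgb.train(); the number of boosting rounds below is an arbitrary choice, not a tuned value:

```r
xgb_model <- xgb.train(
  params = xgb_params,
  data = xgb_train,
  nrounds = 500,
  verbose = 0
)
```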
The results of calling xgb_model are displayed below:
And that’s all you have to do to train your first gradient boosting model! You’ll learn how to evaluate it in the next section.
You can use the predict() function to make predictions with the XGBoost model, just as with any other model. The next step is to convert the predictions to a data frame and assign column names, as the predictions are returned in the form of probabilities:
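A sketch of that conversion, reusing the objects defined earlier. With the multi:softprob objective, predict() returns a flat vector with one probability per class per observation, so it has to be reshaped:

```r
xgb_preds <- predict(xgb_model, xgb_test)

# One row per observation, one column per species
xgb_preds <- as.data.frame(matrix(xgb_preds, ncol = 3, byrow = TRUE))
colnames(xgb_preds) <- levels(iris$Species)
head(xgb_preds)
```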
Here’s what the above code snippet produces:
As you would imagine, these probabilities add up to 1 for a single row. The column with the highest probability is the flower species predicted by the model.
Still, it would be nice to have two additional columns. The first one represents the predicted class (the class with the highest predicted probability). The other represents the actual class, so we can estimate how well the model performs on previously unseen data.
The following snippet does just that:
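One way to add both columns, again with the illustrative names used above:

```r
# Predicted class = the species column with the highest probability in each row
xgb_preds$PredictedClass <- colnames(xgb_preds)[max.col(as.matrix(xgb_preds))]

# Actual class, mapped back from the 0-indexed integer labels
xgb_preds$ActualClass <- levels(iris$Species)[y_test + 1]

head(xgb_preds)
```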
The results are displayed in the following figure:
Things look promising, to say the least, but that’s no reason to jump to conclusions. Next, we can calculate the overall accuracy score as the number of instances where the predicted and actual classes match, divided by the total number of rows:
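With the two columns from the previous snippet in place, this is a one-liner:

```r
accuracy <- sum(xgb_preds$PredictedClass == xgb_preds$ActualClass) / nrow(xgb_preds)
accuracy
```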
Executing the above code prints 0.9333 to the console, indicating the model is roughly 93% accurate on previously unseen data.
While we’re here, we can also print the confusion matrix to see exactly what the model misclassified:
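For example, with caret’s confusionMatrix() (base R’s table() would work just as well):

```r
confusionMatrix(
  factor(xgb_preds$PredictedClass, levels = levels(iris$Species)),
  factor(xgb_preds$ActualClass, levels = levels(iris$Species))
)
```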
The results are shown below:
As you can see, only three virginica flowers were misclassified as versicolor. There were no misclassifications for the setosa species.
And that’s how you can train and evaluate XGBoost models with R. Let’s wrap things up in the next section.
XGBoost is a complex state-of-the-art algorithm for both classification and regression – thankfully, with a simple R API. Entire books have been written on this algorithm alone, so cramming everything into a single article isn’t possible.
You’ve still learned a lot – from the basic theory and intuition to implementation and evaluation in R. If you want to learn more, please stay tuned to the Appsilon blog. More guides on the topic are expected in the following weeks.
If you want to implement machine learning in your organization, you can always reach out to Appsilon for help.
Appsilon is hiring for remote roles! See our Careers page for all open positions, including R Shiny Developers, Fullstack Engineers, Frontend Engineers, a Senior Infrastructure Engineer, and a Community Manager. Join Appsilon and work on groundbreaking projects with the world’s most influential Fortune 500 companies.