Binary Classification Model for Company Bankruptcy Prediction Using XGBoost Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Company Bankruptcy Prediction dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The research team collected the data from the Taiwan Economic Journal from 1999 to 2009. Company bankruptcy was defined based on the business regulations of the Taiwan Stock Exchange. Because failing to catch companies in a shaky financial situation is a costly business proposition, we will optimize for both precision and recall by using the F1 score as the evaluation metric.
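The F1 score is the harmonic mean of precision and recall, so optimizing it penalizes a model that sacrifices either one. Below is a minimal sketch of scoring an XGBoost classifier by F1 with cross-validation; the synthetic, imbalanced data is only a stand-in for the actual bankruptcy dataset.

# Minimal sketch: score an XGBoost classifier by F1 with cross-validation.
# The synthetic, imbalanced data below stands in for the bankruptcy dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Roughly 3% positives, mimicking the rarity of bankruptcy cases
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97],
                           random_state=42)

model = XGBClassifier(n_estimators=300, random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

# F1 = 2 * (precision * recall) / (precision + recall)
scores = cross_val_score(model, X, y, scoring="f1", cv=cv)
print(f"Mean F1: {scores.mean():.4f} (std {scores.std():.4f})")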

The data analysis first appeared in the research paper: Liang, D., Lu, C.-C., Tsai, C.-F., and Shih, G.-A. (2016). Financial Ratios and Corporate Governance Indicators in Bankruptcy Prediction: A Comprehensive Study. European Journal of Operational Research, vol. 252, no. 2, pp. 561-572.

In iteration Take1, we constructed and tuned several classic machine learning models using the Scikit-Learn library. We also observed the best results that we could obtain from the models.

This Take2 iteration will construct and tune an XGBoost model. We will also observe the best results that we can obtain from the model.
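The post does not spell out the tuning procedure; a typical approach is a cross-validated grid search over common XGBoost hyperparameters, sketched below with illustrative (not the actual) parameter ranges and synthetic stand-in data.

# Hedged sketch of a tuning trial: grid search over a few common XGBoost
# hyperparameters, scored by F1. Ranges are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95],
                           random_state=42)

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
}
search = GridSearchCV(XGBClassifier(random_state=42), param_grid,
                      scoring="f1", cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))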

ANALYSIS: In iteration Take1, the machine learning algorithms’ average performance achieved an F1 score of 94.37%. Two algorithms (Extra Trees and Random Forest) produced the top F1 metrics after the first round of modeling. After a series of tuning trials, the Extra Trees model turned in an F1 score of 97.39% using the training dataset. When we applied the Extra Trees model to the previously unseen test dataset, we obtained an F1 score of 55.55%.

In this Take2 iteration, the XGBoost algorithm achieved an F1 score of 96.48% using the training dataset. After a series of tuning trials, the XGBoost model turned in an F1 score of 98.38%. When we applied the XGBoost model to the previously unseen test dataset, we obtained an F1 score of 58.18%.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset. We should consider using the algorithm for further modeling.

Dataset Used: Company Bankruptcy Prediction Data Set

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://archive.ics.uci.edu/ml/datasets/Taiwanese+Bankruptcy+Prediction

One potential source of performance benchmarks: https://www.kaggle.com/fedesoriano/company-bankruptcy-prediction

The HTML formatted report can be found here on GitHub.

Multi-Class Classification Model for Sign Language MNIST Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Sign Language MNIST dataset is a multi-class classification situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The original MNIST image dataset of handwritten digits is a popular benchmark for image-based machine learning methods. The Sign Language MNIST dataset follows the same CSV format, with labels and pixel values in single rows, and aims to stimulate the community to develop more drop-in replacements. The American Sign Language letter database of hand gestures represents a multi-class problem with 24 classes of letters (excluding J and Z, which require motion).

The dataset format is patterned to match the classic MNIST closely. Each training and test case carries a label (0-25) that maps one-to-one to an alphabetic letter A-Z (with no cases for 9=J or 25=Z because those gestures require motion). The training data (27,455 cases) and test data (7,172 cases) are approximately half the size of the standard MNIST but otherwise similar, with a header row of label, pixel1, pixel2, …, pixel784, where each row of 784 pixel values represents a single 28×28 grayscale image with values between 0 and 255. The original hand gesture image data represented multiple users repeating the gesture against different backgrounds.
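Below is a minimal sketch of reading this CSV layout and fitting a multi-class XGBoost model. The file names follow the Kaggle download but should be treated as assumptions.

# Sketch: load the label + pixel1..pixel784 CSV layout and fit a multi-class
# XGBoost model. File names are assumptions based on the Kaggle download.
import pandas as pd
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

train = pd.read_csv("sign_mnist_train.csv")  # assumed file name
test = pd.read_csv("sign_mnist_test.csv")    # assumed file name

# Each row holds a label plus 784 pixel columns (a flattened 28x28 image)
X_train, y_train = train.drop(columns=["label"]), train["label"]
X_test, y_test = test.drop(columns=["label"]), test["label"]

# XGBoost expects class labels 0..n_classes-1 with no gaps; the raw labels
# skip 9 (=J), so re-code them to a contiguous range first.
classes = sorted(y_train.unique())
remap = {c: i for i, c in enumerate(classes)}
y_train, y_test = y_train.map(remap), y_test.map(remap)

model = XGBClassifier(objective="multi:softprob", n_estimators=200,
                      random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.4f}")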

ANALYSIS: The preliminary XGBoost model achieved an accuracy benchmark of 95.41%. After a series of tuning trials, the best XGBoost model processed the training dataset with an accuracy score of 99.68%. When we applied the final model to the previously unseen test dataset, we obtained an accuracy score of 78.93%, which pointed to a high-variance (overfitting) error.

CONCLUSION: In this iteration, the XGBoost model did not appear to be suitable for modeling this dataset. We should consider experimenting with another algorithm for this dataset.

Dataset Used: Sign Language MNIST Data Set

Dataset ML Model: Multi-Class classification with numerical attributes

Dataset Reference: https://www.kaggle.com/datamunge/sign-language-mnist

One potential source of performance benchmarks: https://www.kaggle.com/datamunge/sign-language-mnist

The HTML formatted report can be found here on GitHub.

XGBoost Machine Learning Templates v2 for Python

As I work on practicing and solving machine learning (ML) problems, I repeatedly find myself duplicating a set of steps and activities.

Thanks to Dr. Jason Brownlee’s suggestions on creating a machine learning template, I have pulled together a set of project templates that I use to experiment with modeling ML problems using Python and XGBoost.

Version 2 of the XGBoost templates contains minor adjustments and corrections to the previous version of the template. The updated templates also include:

  • Scikit-learn’s ColumnTransformer, imputing, and pipeline utilities for feature scaling and transformation tasks (see the sketch below)
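A minimal sketch of how these utilities can fit together around an XGBoost estimator; the column names and imputation strategies below are placeholders, not taken from the templates themselves.

# Sketch: ColumnTransformer, imputing, and pipeline utilities feeding an
# XGBoost estimator. Column names and strategies are placeholders.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from xgboost import XGBClassifier

numeric_cols = ["num_a", "num_b"]   # placeholder column names
categorical_cols = ["cat_a"]        # placeholder column names

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]),
     categorical_cols),
])

model = Pipeline([("prep", preprocess),
                  ("xgb", XGBClassifier(random_state=42))])
# model.fit(X_train, y_train)  # fit the whole pipeline on the training split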

You will find the Python templates on the Machine Learning Project Templates page.

Binary Classification Model for Credit Card Default Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Credit Card Default dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: This dataset contains information on default payments, demographic factors, credit data, payment history, and bill statements of credit card clients in Taiwan from April 2005 to September 2005.
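The post does not detail the preprocessing; one common way to handle this mix of demographic (categorical) and numerical attributes is one-hot encoding before fitting XGBoost. A minimal sketch follows, with the file path, target name, and categorical column list as assumptions.

# Sketch: one-hot encode categorical attributes, then fit and score XGBoost.
# The file path, target name, and column list are assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("credit_card_default.csv")  # assumed file name
X = pd.get_dummies(df.drop(columns=["default"]),              # assumed target
                   columns=["SEX", "EDUCATION", "MARRIAGE"])  # assumed columns
y = df["default"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=300, random_state=42).fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.4f}")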

ANALYSIS: The baseline performance of the XGBoost algorithm achieved an accuracy benchmark of 81.44%. After a series of tuning trials, the XGBoost model processed the training dataset with an accuracy score of 82.20%. When we applied the XGBoost algorithm to the previously unseen test dataset, we obtained an accuracy score of 81.81%.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset. We should consider using the algorithm for further modeling.

Dataset Used: Default of Credit Card Clients Dataset

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients

The HTML formatted report can be found here on GitHub.

Regression Model for Superconductor Critical Temperature Using XGBoost Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Superconductor Critical Temperature dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: The research team wishes to create a statistical model for predicting the superconducting critical temperature based on features extracted from the superconductor’s chemical formula. The modeling effort also seeks to identify the features that contribute the most to the model’s predictive accuracy.

From previous iterations, we constructed and tuned several classic machine learning models using the Scikit-Learn library. We also observed the best results that we could obtain from the models.

From iteration Take1, we constructed and tuned an XGBoost model. Furthermore, we applied the XGBoost model to a test dataset and observed the best result that we could obtain from the model.

In this Take2 iteration, we will construct and tune an XGBoost model using the additional material attributes available for modeling. Furthermore, we will apply the XGBoost model to a test dataset and observe the best result that we can obtain from the model.
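As a rough illustration of this evaluation pattern, the sketch below fits an XGBoost regressor and scores a held-out test set by RMSE; the synthetic data only stands in for the extracted superconductivity features.

# Sketch: XGBoost regression scored by RMSE on a held-out test set.
# Synthetic data stands in for the superconductivity features.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=5000, n_features=81, noise=10.0,
                       random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = XGBRegressor(n_estimators=500, learning_rate=0.1, random_state=42)
model.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"Test RMSE: {rmse:.2f}")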

ANALYSIS: From previous iterations, the Extra Trees model turned in the best overall result and achieved an RMSE metric of 9.56. By using the optimized parameters, the Extra Trees algorithm processed the test dataset with an RMSE of 9.32.

From iteration Take1, the baseline performance of the XGBoost algorithm achieved an RMSE benchmark of 12.88. After a series of tuning trials, the XGBoost model processed the validation dataset with an RMSE score of 9.88. When we applied the XGBoost model to the previously unseen test dataset, we obtained an RMSE score of 9.06.

In this Take2 iteration, the baseline performance of the XGBoost algorithm achieved an RMSE benchmark of 12.54. After a series of tuning trials, the XGBoost model processed the validation dataset with an RMSE score of 9.58. When we applied the XGBoost model to the previously unseen test dataset, we obtained an RMSE score of 8.94.

CONCLUSION: In this iteration, the additional material attributes further improved the XGBoost model’s performance on this dataset. We should consider using the algorithm for further modeling.

Dataset Used: Superconductivity Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://archive.ics.uci.edu/ml/datasets/Superconductivty+Data

One potential source of performance benchmarks: https://doi.org/10.1016/j.commatsci.2018.07.052

The HTML formatted report can be found here on GitHub.

Regression Model for Superconductor Critical Temperature Using XGBoost Take 1

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Superconductor Critical Temperature dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: The research team wishes to create a statistical model for predicting the superconducting critical temperature based on features extracted from the superconductor’s chemical formula. The modeling effort also seeks to identify the features that contribute the most to the model’s predictive accuracy.

From previous iterations, we constructed and tuned several classic machine learning models using the Scikit-Learn library. We also observed the best results that we could obtain from the models.

In this Take1 iteration, we will construct and tune an XGBoost model. Furthermore, we will apply the XGBoost model to a test dataset and observe the best result that we can obtain from the model.

ANALYSIS: From previous iterations, the Extra Trees model turned in the best overall result and achieved an RMSE metric of 9.56. By using the optimized parameters, the Extra Trees algorithm processed the test dataset with an RMSE of 9.32.

In this Take1 iteration, the baseline performance of the XGBoost algorithm achieved an RMSE benchmark of 12.88. After a series of tuning trials, the XGBoost model processed the validation dataset with an RMSE score of 9.88. When we applied the XGBoost model to the previously unseen test dataset, we obtained an RMSE score of 9.06.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset. We should consider using the algorithm for further modeling.

Dataset Used: Superconductivity Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://archive.ics.uci.edu/ml/datasets/Superconductivty+Data

One potential source of performance benchmarks: https://doi.org/10.1016/j.commatsci.2018.07.052

The HTML formatted report can be found here on GitHub.

XGBoost Machine Learning Templates v1 for Python

As I work on practicing and solving machine learning (ML) problems, I find myself repeating the same set of steps and activities.

Thanks to Dr. Jason Brownlee’s suggestions on creating a machine learning template, I have pulled together a set of project templates that I use to experiment with modeling ML problems using Python and XGBoost.

Version 1 of the XGBoost templates contains structures and features that are similar to the Scikit-Learn templates. The XGBoost templates were designed to take a machine learning modeling exercise from beginning to end.

You will find the Python templates on the Machine Learning Project Templates page.