Regression Model for Kaggle Tabular Playground Series 2021 August Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series Aug 2021 dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new in their data science journey. Since January 2021, Kaggle has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset aims to predict the loss from a loan default. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The average performance of the machine learning algorithms achieved an RMSE benchmark of 8.0771 using the training dataset. We selected ElasticNet and Gradient Boosting to perform the tuning exercises. After a series of tuning trials, the refined Gradient Boosting model processed the training dataset with a final RMSE score of 7.8563. When we processed Kaggle’s test dataset with the final model, the model achieved an RMSE score of 7.8416.
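As a rough illustration of the spot-check-and-tune workflow summarized above (not the project's exact code), the sketch below compares ElasticNet and Gradient Boosting with cross-validated RMSE in scikit-learn; synthetic data stands in for the Kaggle training file, so the scores will differ from those reported.

```python
# Minimal sketch of the spot-check step: compare ElasticNet and Gradient
# Boosting with cross-validated RMSE. Synthetic data replaces the Kaggle
# training file, so the scores will not match the report.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=2000, n_features=100, noise=10.0, random_state=42)
cv = KFold(n_splits=10, shuffle=True, random_state=42)

for name, model in [("ElasticNet", ElasticNet()),
                    ("GradientBoosting", GradientBoostingRegressor(random_state=42))]:
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE {-scores.mean():.4f} (+/- {scores.std():.4f})")
```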

CONCLUSION: In this iteration, the Gradient Boosting model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground Series Aug 2021 Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-aug-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-aug-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Regression Model for Kaggle Tabular Playground Series 2021 August Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series Aug 2021 dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new in their data science journey. Since January 2021, Kaggle has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset aims to predict the loss from a loan default. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the preliminary XGBoost model achieved an RMSE benchmark of 7.8834. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final RMSE score of 7.8463. When we applied the final model to Kaggle’s test dataset, the model achieved an RMSE score of 7.8324.
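The sketch below shows one way the tuning trials could look: scikit-learn's grid search wrapped around an XGBoost regressor scored by RMSE. The parameter grid and the synthetic data are illustrative assumptions, not the settings behind the reported scores.

```python
# Minimal sketch of tuning an XGBoost regressor for RMSE with a small,
# illustrative grid search; synthetic data replaces the Kaggle training file.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold
from xgboost import XGBRegressor

X, y = make_regression(n_samples=2000, n_features=100, noise=10.0, random_state=42)

param_grid = {
    "n_estimators": [300, 600],
    "max_depth": [4, 6],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    XGBRegressor(random_state=42, n_jobs=-1),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=KFold(n_splits=5, shuffle=True, random_state=42),
)
search.fit(X, y)
print("Best RMSE:", -search.best_score_, "with", search.best_params_)
```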

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground Series Aug 2021 Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-aug-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-aug-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Bondora P2P Lending Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Bondora P2P Lending dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The Kaggle dataset owner retrieved this dataset from Bondora, a leading European peer-to-peer lending platform. The data comprises demographic and financial information on borrowers with defaulted and non-defaulted loans issued between February 2009 and July 2021. For investors, peer-to-peer (P2P) lending offers an attractive way to diversify portfolios and enhance long-term performance. However, to make effective decisions, investors want to minimize the default risk of each lending decision while realizing a return that compensates for that risk. Therefore, we will predict the default risk by focusing on the “DefaultDate” attribute as the target.

ANALYSIS: The performance of the preliminary XGBoost model achieved a ROC-AUC benchmark of 0.9712. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final ROC-AUC score of 0.9849. When we applied the final model to Kaggle’s test dataset, the model achieved a ROC-AUC score of 0.9307.
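A minimal sketch of fitting an XGBoost classifier and scoring it with ROC-AUC on a hold-out split appears below; it assumes synthetic data in place of the Bondora loan table and illustrative hyperparameters, so it is not the project's exact pipeline.

```python
# Minimal sketch: fit an XGBoost classifier and score it with ROC-AUC on a
# hold-out split. Synthetic data stands in for the Bondora loan table.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.7, 0.3],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(n_estimators=500, learning_rate=0.1, max_depth=5,
                      random_state=42, n_jobs=-1)
model.fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print(f"Hold-out ROC-AUC: {roc_auc_score(y_test, proba):.4f}")
```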

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Bondora P2P Lending Loan Data

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/sid321axn/bondora-peer-to-peer-lending-loan-data

Dataset Attribute Description: https://www.bondora.com/en/public-reports

One potential source of performance benchmark: https://www.kaggle.com/sid321axn/bondora-peer-to-peer-lending-loan-data/code

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Bondora P2P Lending Using Python and Scikit-Learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Bondora P2P Lending dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The Kaggle dataset owner retrieved this dataset from Bondora, a leading European peer-to-peer lending platform. The data comprises demographic and financial information on borrowers with defaulted and non-defaulted loans issued between February 2009 and July 2021. For investors, peer-to-peer (P2P) lending offers an attractive way to diversify portfolios and enhance long-term performance. However, to make effective decisions, investors want to minimize the default risk of each lending decision while realizing a return that compensates for that risk. Therefore, we will predict the default risk by focusing on the “DefaultDate” attribute as the target.

ANALYSIS: The average performance of the machine learning algorithms achieved a ROC-AUC benchmark of 0.9539 using the training dataset. We selected Random Forest and Extra Trees to perform the tuning exercises. After a series of tuning trials, the refined Extra Trees model processed the training dataset with a final ROC-AUC score of 0.9801. When we processed the test dataset with the final model, the model achieved a ROC-AUC score of 0.9162.
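The sketch below mirrors the spot-check step described above by comparing Random Forest and Extra Trees with cross-validated ROC-AUC; synthetic data replaces the Bondora loan table, so the benchmark figures above will not be reproduced.

```python
# Minimal sketch: compare Random Forest and Extra Trees with cross-validated
# ROC-AUC. Synthetic data replaces the Bondora loan table.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.7, 0.3],
                           random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

for name, model in [("RandomForest", RandomForestClassifier(random_state=42)),
                    ("ExtraTrees", ExtraTreesClassifier(random_state=42))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: ROC-AUC {scores.mean():.4f} (+/- {scores.std():.4f})")
```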

CONCLUSION: In this iteration, the Extra Trees model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Bondora P2P Lending Loan Data

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/sid321axn/bondora-peer-to-peer-lending-loan-data

Dataset Attribute Description: https://www.bondora.com/en/public-reports

One potential source of performance benchmark: https://www.kaggle.com/sid321axn/bondora-peer-to-peer-lending-loan-data/code

The HTML formatted report can be found here on GitHub.

Binary Classification Model for LendingClub Loan Data Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle LendingClub Loan Data dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The Kaggle dataset owner derived this dataset from the publicly available data of LendingClub.com. LendingClub connects people who need money (borrowers) with people who have money (investors). An investor would naturally want to invest in borrowers whose profiles indicate a high probability of paying back the loan. The dataset covers lending data from 2007 through 2010, and we will try to predict whether a borrower paid back their loan in full.

ANALYSIS: The performance of the preliminary XGBoost model achieved a ROC-AUC benchmark of 0.8103. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final ROC-AUC score of 0.8491. When we applied the final model to Kaggle’s test dataset, the model achieved a ROC-AUC score of 0.6039.
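One plausible way to handle the mix of numerical and categorical loan attributes before XGBoost is to one-hot encode the categorical columns inside a pipeline, as sketched below. The toy DataFrame and its column names (purpose, int.rate, fico, not.fully.paid) are assumptions used purely for illustration, not the actual Kaggle file.

```python
# Minimal sketch: one-hot encode a categorical loan feature in a pipeline
# before an XGBoost classifier, then score with ROC-AUC. The column names
# and randomly generated values are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "purpose": rng.choice(["debt_consolidation", "credit_card", "all_other"], n),
    "int.rate": rng.uniform(0.06, 0.22, n),
    "fico": rng.integers(600, 850, n),
    "not.fully.paid": rng.integers(0, 2, n),
})
X = df.drop(columns="not.fully.paid")
y = df["not.fully.paid"]

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["purpose"])],
    remainder="passthrough")
pipe = Pipeline([("pre", pre),
                 ("model", XGBClassifier(n_estimators=300, max_depth=4,
                                         random_state=42, n_jobs=-1))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)
pipe.fit(X_tr, y_tr)
print(f"Hold-out ROC-AUC: {roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]):.4f}")
```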

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle LendingClub Loan Data

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/itssuru/loan-data

One potential source of performance benchmark: https://www.kaggle.com/itssuru/loan-data/code

The HTML formatted report can be found here on GitHub.

Binary Classification Model for LendingClub Loan Data Using Python and Scikit-Learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle LendingClub Loan Data dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The Kaggle dataset owner derived this dataset from the publicly available data of LendingClub.com. LendingClub connects people who need money (borrowers) with people who have money (investors). An investor would naturally want to invest in borrowers whose profiles indicate a high probability of paying back the loan. The dataset covers lending data from 2007 through 2010, and we will try to predict whether a borrower paid back their loan in full.

ANALYSIS: The average performance of the machine learning algorithms achieved a ROC-AUC benchmark of 0.7824 using the training dataset. We selected Random Forest and Extra Trees to perform the tuning exercises. After a series of tuning trials, the refined Extra Trees model processed the training dataset with a final ROC-AUC score of 0.8914. When we processed the test dataset with the final model, the model achieved a ROC-AUC score of 0.6064.
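As a hedged sketch of the tuning exercise described above, the code below runs a small grid search over Extra Trees hyperparameters scored by ROC-AUC; the grid and the synthetic data are illustrative only, not the settings behind the reported scores.

```python
# Minimal sketch: tune an Extra Trees classifier with a small grid search
# scored by ROC-AUC. Synthetic data replaces the loan file.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.8, 0.2],
                           random_state=42)
param_grid = {"n_estimators": [200, 500],
              "max_features": ["sqrt", None],
              "min_samples_leaf": [1, 5]}
search = GridSearchCV(
    ExtraTreesClassifier(random_state=42, n_jobs=-1),
    param_grid,
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42))
search.fit(X, y)
print("Best ROC-AUC:", search.best_score_, "with", search.best_params_)
```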

CONCLUSION: In this iteration, the Extra Trees model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle LendingClub Loan Data

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/itssuru/loan-data

One potential source of performance benchmark: https://www.kaggle.com/itssuru/loan-data/code

The HTML formatted report can be found here on GitHub.

Multi-Class Model for Kaggle Tabular Playground Series 2021 June Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground June 2021 dataset is a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new in their data science journey. Since January 2021, Kaggle has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset deals with predicting the category of an eCommerce product given various attributes about the listing. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the preliminary XGBoost model achieved a logarithmic loss benchmark of 1.7534. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final logarithmic loss score of 1.7497. When we applied the final model to Kaggle’s test dataset, the model achieved a logarithmic loss of 1.7483.
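The sketch below shows a minimal multi-class XGBoost setup evaluated with logarithmic loss on predicted class probabilities; synthetic data with nine classes stands in for the competition file, and the hyperparameters are assumptions rather than the tuned values behind the reported scores.

```python
# Minimal sketch: multi-class XGBoost model scored with logarithmic loss on
# a hold-out split. Synthetic nine-class data replaces the Kaggle file.
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=50, n_informative=20,
                           n_classes=9, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

model = XGBClassifier(objective="multi:softprob", n_estimators=300,
                      max_depth=6, learning_rate=0.1,
                      random_state=42, n_jobs=-1)
model.fit(X_tr, y_tr)
print(f"Hold-out log loss: {log_loss(y_te, model.predict_proba(X_te)):.4f}")
```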

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground 2021 June Data Set

Dataset ML Model: Multi-Class classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-jun-2021/

One potential source of performance benchmark: https://www.kaggle.com/c/tabular-playground-series-jun-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Multi-Class Model for Kaggle Tabular Playground Series 2021 June Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground June 2021 dataset is a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new in their data science journey. Since January 2021, Kaggle has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset deals with predicting the category of an eCommerce product given various attributes about the listing. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The average performance of the machine learning algorithms achieved a logarithmic loss benchmark of 5.6058 using the training dataset. We selected Logistic Regression and Random Forest to perform the tuning exercises. After a series of tuning trials, the refined Random Forest model processed the training dataset with a final logarithmic loss score of 1.7700. When we processed Kaggle’s test dataset with the final model, the model achieved a logarithmic loss score of 1.7682.
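A minimal sketch of the spot-check step, assuming synthetic multi-class data in place of the Kaggle training file, compares Logistic Regression and Random Forest with cross-validated logarithmic loss:

```python
# Minimal sketch: compare Logistic Regression and Random Forest with
# cross-validated logarithmic loss. Synthetic data replaces the Kaggle file.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=5000, n_features=50, n_informative=20,
                           n_classes=9, random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

for name, model in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                    ("RandomForest", RandomForestClassifier(random_state=42))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="neg_log_loss")
    print(f"{name}: log loss {-scores.mean():.4f} (+/- {scores.std():.4f})")
```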

CONCLUSION: In this iteration, the Random Forest model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground 2021 June Data Set

Dataset ML Model: Multi-Class classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-jun-2021/

One potential source of performance benchmark: https://www.kaggle.com/c/tabular-playground-series-jun-2021/leaderboard

The HTML formatted report can be found here on GitHub.