Multi-Class Tabular Classification Model for Avila Bible Identification Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Avila Bible Identification dataset is a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The Avila dataset was extracted from 800 images of the “Avila Bible,” a giant Latin copy of the whole Bible produced during the XII century between Italy and Spain. Paleographic analysis of the manuscript identified the presence of 12 transcribers; however, the transcribers did not each copy the same number of pages. The prediction task is to associate each pattern with one of the 12 transcribers, labeled A, B, C, D, E, F, G, H, I, W, X, and Y. The research team normalized the data using the Z-normalization method and divided the dataset into training and test portions. The training set contains 10,430 samples, while the test set contains 10,437 samples.

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 86.67%. After a series of tuning trials, the final model achieved an accuracy score of 99.79% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 99.81%.
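
To make the baseline step concrete, below is a minimal sketch in the spirit of this workflow. The file name, column layout, and seed are assumptions for illustration, not the project's exact code: it fits a default XGBoost multi-class classifier on the pre-split Avila training file and estimates accuracy with stratified 10-fold cross-validation.

```python
# Baseline sketch (assumed file name, headerless layout, and seed; not the report's exact code).
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

# Assumption: the training portion is a headerless CSV with 10 numeric features
# followed by the transcriber label (A, B, ..., Y) in the last column.
train_df = pd.read_csv("avila-tr.txt", header=None)
X_train = train_df.iloc[:, :-1]
y_train = LabelEncoder().fit_transform(train_df.iloc[:, -1])  # map the 12 letters to 0..11

model = XGBClassifier(objective="multi:softprob", eval_metric="mlogloss", random_state=888)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=888)
scores = cross_val_score(model, X_train, y_train, cv=cv, scoring="accuracy", n_jobs=-1)
print(f"Baseline CV accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
```

The tuning trials mentioned above would then adjust parameters such as n_estimators, max_depth, and learning_rate against this baseline.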

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Avila Bible Dataset

Dataset ML Model: Multi-Class classification with numerical features

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/avila

One source of potential performance benchmarks: https://www.sciencedirect.com/science/article/abs/pii/S0952197618300721

The HTML formatted report can be found here on GitHub.

Binary-Class Tabular Classification Model for Raisin Grains Identification Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Raisin Grains Identification dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: In this study, the research team developed a computerized vision system to classify two different varieties of raisin grown in Turkey. The dataset contains measurements for 900 raisin grain images, and each grain image was further broken down into seven morphological features.

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 85.92%. After a series of tuning trials, the final model achieved an accuracy score of 86.17% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 86.66%.
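
The "series of tuning trials" can be sketched as a small grid search over common XGBoost parameters. The parameter grid, file name, and column names below are hypothetical placeholders, not the trials actually run in the report.

```python
# Tuning sketch (hypothetical grid, file name, and column names; not the trials actually run).
import pandas as pd
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

# Assumption: seven numeric morphological columns plus a "Class" column with the two varieties.
df = pd.read_csv("Raisin_Dataset.csv")
X = df.drop(columns=["Class"])
y = LabelEncoder().fit_transform(df["Class"])

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
}
search = GridSearchCV(
    estimator=XGBClassifier(eval_metric="logloss", random_state=888),
    param_grid=param_grid,
    scoring="accuracy",
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=888),
    n_jobs=-1,
)
search.fit(X, y)
print("Best CV accuracy:", round(search.best_score_, 4))
print("Best parameters:", search.best_params_)
```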

CONCLUSION: In this iteration, the XGBoost model appeared to be suitable for modeling this dataset.

Dataset Used: Raisin Dataset

Dataset ML Model: Binary classification with numerical features

Dataset Reference: https://www.muratkoklu.com/datasets/

One source of potential performance benchmarks: https://doi.org/10.30855/gmbd.2020.03.03

The HTML formatted report can be found here on GitHub.

Binary-Class Tabular Classification Model for Rice Cammeo Osmancik Identification Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Rice Cammeo Osmancik Identification dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Rice is one of the most widely produced and consumed cereal crops globally. The crop is also the main sustenance for many countries because of its economic and nutritious nature. However, before rice reaches consumers, it must go through many processing steps such as cleaning, color sorting, and classification. In this study, the research team developed a computerized vision system to classify two rice varieties, Cammeo and Osmancik. The dataset contains measurements for 3,810 rice grain images, and each grain image was further broken down into seven morphological features.

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 92.79%. After a series of tuning trials, the final model achieved an accuracy score of 92.97% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 92.65%.
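
As a sketch of the final evaluation step, the snippet below holds out a stratified test split, refits a model on the training split, and reports accuracy on both portions. The file name, column names, and the "tuned" parameter values are assumptions for illustration.

```python
# Evaluation sketch (assumed file name, column names, and placeholder "tuned" settings).
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

df = pd.read_csv("Rice_Cammeo_Osmancik.csv")            # assumed CSV export of the source data
X = df.drop(columns=["Class"])                          # seven morphological features
y = LabelEncoder().fit_transform(df["Class"])           # Cammeo / Osmancik -> 0 / 1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=888
)

final_model = XGBClassifier(
    n_estimators=300, max_depth=3, learning_rate=0.1,   # placeholder values, not the tuned result
    eval_metric="logloss", random_state=888,
)
final_model.fit(X_train, y_train)
print("Train accuracy:", round(accuracy_score(y_train, final_model.predict(X_train)), 4))
print("Test accuracy:", round(accuracy_score(y_test, final_model.predict(X_test)), 4))
```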

CONCLUSION: In this iteration, the XGBoost model appeared to be suitable for modeling this dataset.

Dataset Used: Rice Dataset Cammeo and Osmancik

Dataset ML Model: Binary classification with numerical features

Dataset Reference: https://www.muratkoklu.com/datasets/

One source of potential performance benchmarks: https://doi.org/10.18201/ijisae.2019355381

The HTML formatted report can be found here on GitHub.

Multi-Class Tabular Classification Model for Durum Wheat Identification Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Durum Wheat Identification dataset is a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: Wheat is the main ingredient of most common food products in many people’s daily lives. Obtaining good quality wheat kernels is an essential matter for food supplies. In this study, the research team attempted to examine and classify type-1252 durum wheat kernels to obtain top-quality crops based on their vitreousness. The researchers used a total of 236 morphological, color, wavelet, and gaborlet features to classify durum wheat kernels and foreign objects, training several Artificial Neural Networks (ANNs) with different numbers of elements selected from the feature rank list obtained with the ANOVA test.
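
The ANOVA-based feature ranking described above can be approximated with scikit-learn's univariate F-test. The snippet below is a sketch under assumed file and column names, not the researchers' original pipeline: it scores the 236 features and keeps an illustrative top-k subset.

```python
# Feature-ranking sketch (assumed file and column names; mirrors the ANOVA ranking idea only).
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import LabelEncoder

# Assumption: all columns are numeric features except a "Class" column naming the kernel category.
df = pd.read_csv("Durum_Wheat_Features.csv")
X = df.drop(columns=["Class"])
y = LabelEncoder().fit_transform(df["Class"])

selector = SelectKBest(score_func=f_classif, k=50)      # k=50 is an illustrative choice
X_selected = selector.fit_transform(X, y)

ranked = sorted(zip(X.columns, selector.scores_), key=lambda pair: pair[1], reverse=True)
print("Top 10 features by ANOVA F-score:")
for name, score in ranked[:10]:
    print(f"  {name}: {score:.2f}")
```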

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 99.30%. After a series of tuning trials, the final model achieved an accuracy score of 99.60% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 99.55%.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Durum Wheat Dataset

Dataset ML Model: Multi-Class classification with numerical features

Dataset Reference: https://www.muratkoklu.com/datasets/

One source of potential performance benchmarks: https://doi.org/10.1016/j.compag.2019.105016

The HTML formatted report can be found here on GitHub.

Binary-Class Model for Acoustic Extinguisher Fire Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Acoustic Extinguisher Fire dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Fire is a disaster that can have many different causes, and traditional fire extinguishing methods can be harmful to people. In this study, the research team tested a sound wave flame-extinguishing system to extinguish the flames at an early fire stage. The researchers conducted 17,442 extinguishing experiments using different flame sizes, frequencies, and distance ranges in their study. The goal is to create an environmentally friendly system with innovative extinguishing methods.

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 97.74%. After a series of tuning trials, the final model achieved an accuracy score of 97.86% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 98.58%.
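
Because this dataset mixes numerical and categorical inputs, the sketch below shows one common way to prepare it for XGBoost: one-hot encode the categorical column and pass the numeric columns through. The file name, column names (e.g., FUEL and STATUS), and label encoding are assumptions for illustration, not necessarily how the report handled them.

```python
# Preprocessing sketch (assumed file and column names, e.g. a categorical FUEL column
# and a binary STATUS target; the actual report may encode the data differently).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from xgboost import XGBClassifier

df = pd.read_excel("Acoustic_Extinguisher_Fire_Dataset.xlsx")   # assumed file name
X = df.drop(columns=["STATUS"])
y = df["STATUS"]                                                # assumed 0/1 extinction label

categorical_cols = ["FUEL"]                                     # assumed categorical feature
numeric_cols = [c for c in X.columns if c not in categorical_cols]

preprocess = ColumnTransformer(
    transformers=[
        ("onehot", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
        ("numeric", "passthrough", numeric_cols),
    ]
)
pipeline = Pipeline([
    ("prep", preprocess),
    ("model", XGBClassifier(eval_metric="logloss", random_state=888)),
])
pipeline.fit(X, y)
print("Training accuracy:", round(pipeline.score(X, y), 4))
```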

CONCLUSION: In this iteration, the XGBoost model appeared to be suitable for modeling this dataset.

Dataset Used: Acoustic Extinguisher Fire Dataset

Dataset ML Model: Binary classification with numerical and categorical features

Dataset Reference: https://www.muratkoklu.com/datasets/

The HTML formatted report can be found here on GitHub.

Multi-Class Model for Kaggle Tabular Playground Series February 2022 Using XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series February 2022 dataset is a multi-class modeling situation where we are trying to predict one of several (more than two) possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new in their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. For this dataset, we want to predict bacteria species based on repeated lossy measurements of DNA snippets. Each row of data contains a spectrum of histograms generated by repeated measurements of a sample, covering the output of all 286 histogram possibilities.

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 98.16%. After a series of tuning trials, the final model achieved an accuracy score of 99.24% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 93.45%.
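
Given the gap between the training score and the test score, one standard mitigation is to hold out a validation split and let early stopping choose the number of boosting rounds. The sketch below assumes the Kaggle column names row_id and target and xgboost version 1.6 or newer; it is illustrative rather than the tuning actually performed.

```python
# Overfitting-control sketch (assumed Kaggle column names "row_id" and "target"; xgboost >= 1.6).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

train_df = pd.read_csv("train.csv")
X = train_df.drop(columns=["row_id", "target"])
y = LabelEncoder().fit_transform(train_df["target"])    # bacteria species names -> integer labels

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y, random_state=888)

model = XGBClassifier(
    n_estimators=2000,
    learning_rate=0.05,
    eval_metric="mlogloss",
    early_stopping_rounds=50,       # stop when the validation log-loss stops improving
    random_state=888,
)
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
print("Best iteration:", model.best_iteration)
print("Validation accuracy:", round(model.score(X_val, y_val), 4))
```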

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground Series February 2022

Dataset ML Model: Multi-Class classification with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-feb-2022/data

One source of potential performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-feb-2022/leaderboard

The HTML formatted report can be found here on GitHub.

Multi-Class Model for Rice Varieties Identification Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Rice Varieties Identification dataset is a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The dataset owner collected 75,000 grains of rice and created a dataset that classifies each grain into one of five varieties (Arborio, Basmati, Ipsala, Jasmine, Karacadag). The research team applied various preprocessing operations to the rice images and obtained the features. Each record contains 106 attributes, including 12 morphological features, four shape features, and 90 color features obtained from five different color spaces (RGB, HSV, L*a*b*, YCbCr, XYZ).

ANALYSIS: The preliminary XGBoost model achieved a baseline accuracy of 99.88%. After a series of tuning trials, the final model achieved an accuracy score of 99.90% on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 99.87%.
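
With 106 engineered features, inspecting XGBoost's feature importances is a quick way to see which morphology, shape, and color measurements carry the signal. The sketch below assumes a CSV export with a CLASS label column; it is illustrative only, not the report's exact code.

```python
# Feature-importance sketch (assumed CSV export with a "CLASS" label column; illustrative only).
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

df = pd.read_csv("Rice_MSC_Dataset.csv")
X = df.drop(columns=["CLASS"])
y = LabelEncoder().fit_transform(df["CLASS"])           # Arborio, Basmati, Ipsala, Jasmine, Karacadag

model = XGBClassifier(objective="multi:softprob", eval_metric="mlogloss", random_state=888)
model.fit(X, y)

importance = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head(10))                              # ten most influential of the 106 features
```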

CONCLUSION: In this iteration, XGBoost appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Rice MSC Dataset

Dataset ML Model: Multi-Class classification with numerical features

Dataset Reference: https://www.kaggle.com/mkoklu42/rice-msc-dataset

One source of potential performance benchmarks: https://www.kaggle.com/mkoklu42/rice-msc-dataset/code

The HTML formatted report can be found here on GitHub.

Binary-Class Model for Heart Disease Key Indicators Using XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Heart Disease Key Indicators dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: This dataset comes from the CDC’s Behavioral Risk Factor Surveillance System (BRFSS) study, which conducts annual telephone surveys to gather data on the health status of U.S. residents. The original dataset consists of 401,958 rows and 279 columns. However, the Kaggle project owner selected some of the most relevant attributes from the dataset and cleaned it up for machine learning projects.

ANALYSIS: The preliminary XGBoost model achieved a baseline ROC-AUC of 92.68%. After a series of tuning trials, the final model achieved a ROC-AUC score of 92.81% on the training dataset. When we processed the test dataset with the final model, it achieved a ROC-AUC score of 72.25%.
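
Since this project scores models with ROC-AUC rather than accuracy, the sketch below shows a probability-based evaluation, with a class-imbalance weight that this heavily skewed dataset typically calls for. The file name, the HeartDisease/Yes target encoding, and the simple one-hot preprocessing are assumptions for illustration.

```python
# ROC-AUC evaluation sketch (assumed file name and "HeartDisease" Yes/No target; illustrative only).
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("heart_2020_cleaned.csv")              # assumed file name from the Kaggle page
y = (df["HeartDisease"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["HeartDisease"]))   # simple one-hot of the categorical columns

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=888)

# scale_pos_weight counteracts the heavy class imbalance in this dataset
pos_weight = (y_tr == 0).sum() / (y_tr == 1).sum()
model = XGBClassifier(eval_metric="auc", scale_pos_weight=pos_weight, random_state=888)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]                 # probability of the positive class
print("Test ROC-AUC:", round(roc_auc_score(y_te, proba), 4))
```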

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Personal Key Indicators of Heart Disease Dataset

Dataset ML Model: Binary classification with numerical and categorical features

Dataset Reference: https://www.kaggle.com/kamilpytlak/personal-key-indicators-of-heart-disease

One source of potential performance benchmarks: https://www.kaggle.com/kamilpytlak/personal-key-indicators-of-heart-disease/code

The HTML formatted report can be found here on GitHub.