Regression Model for Kaggle Tabular Playground Series 2021 Feb Using Python and AutoKeras

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series 2021 Feb dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. The February dataset may be synthetic but is based on a real dataset and generated using a CTGAN. The original dataset tries to predict the amount of an insurance claim. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the best, preliminary AutoKeras model achieved an RMSE benchmark of 0.8625. When we applied the final model to Kaggle’s test dataset, the model achieved an RMSE score of 0.8648.

CONCLUSION: In this iteration, the AutoKeras model appeared to be a suitable algorithm for modeling this dataset.
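
For readers who want to try a similar search, the sketch below shows one way an AutoKeras structured-data regressor could be pointed at this dataset. The file paths, column names, trial count, and epoch budget are illustrative assumptions rather than the exact settings behind the benchmark above.

```python
# Minimal AutoKeras sketch for a Tabular Playground-style regression task.
# File paths, column names, max_trials, and epochs are illustrative assumptions.
import numpy as np
import pandas as pd
import autokeras as ak
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

train_df = pd.read_csv("train.csv")           # assumed Kaggle training file
X = train_df.drop(columns=["id", "target"])   # categorical + numerical features
y = train_df["target"]

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# AutoKeras infers column types and searches over network architectures.
reg = ak.StructuredDataRegressor(max_trials=10, loss="mean_squared_error", seed=42)
reg.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)

# Score the held-out fold with RMSE to match the benchmark metric.
preds = reg.predict(X_val)
print("Validation RMSE:", np.sqrt(mean_squared_error(y_val, preds)))
```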

Dataset Used: Kaggle Tabular Playground Series 2021 Feb Data Set

Dataset ML Model: Regression with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-feb-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-feb-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Seth Godin’s Akimbo: Understanding Percentages

In his Akimbo podcast, Seth Godin teaches us how to adopt a posture of possibility, change the culture, and choose to make a difference. Here are my takeaways from the episode.

In this podcast, Seth discusses the percentages behind our public health events and why we need to interpret the math correctly. There are two crucial questions Seth wanted to dissect.

First, why should we bother with an intervention with a 90% success rate when 99% of the people recover eventually? That question oversimplifies the 99% recovery rate and projects a false sense of security.

With a population of 300 million in the US, a 1% death rate from a disease means that approximately three million Americans would die. That is a considerable number, and if any other disease or natural disaster had a death toll like that, everyone would be paying attention. A 99% survival rate sounds excellent unless you are one of the 1%.

Another argument says that a vaccine’s 90% efficacy rate is not worth the effort if 99% of the people recover. Comparing two percentages like those might feel appropriate, but it is not. Before we compare two percentages, we need to dig deeper into what each one measures.

While the 90% efficacy rate seems less impressive than the 99% recovery rate, they mean different things because they cover two overlapping but still different segments of the population. If 1% of the population faces certain doom without the vaccine, the 90% efficacy rate can still make a marked difference. By vaccinating people, we have an opportunity to save 2.7 million (90% of the 3 million at risk) from certain death.

The second question is why bother getting the second dose of the vaccine if it only increases the efficacy by 15 percentage points, from 80% to 95%. Again, we like to take shortcuts with the numbers, but that often leads to bias.

In a town of 1,000 people where everyone would otherwise be exposed, an 80% efficacy rate means about 200 people will get sick. If everyone gets the second vaccination shot and reaches an overall 95% efficacy rate, only about 50 people will probably get sick. From an individual perspective, the 15-point difference might not seem like a lot, but when we factor in a much larger population, even small percentages become significant.
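
To make the arithmetic behind both questions explicit, here is a small sketch of the calculations described above, using the rounded figures from the episode.

```python
# Back-of-the-envelope arithmetic from the two questions above.
us_population = 300_000_000
fatality_rate = 0.01                       # "99% recover" -> 1% do not
at_risk = us_population * fatality_rate    # roughly 3 million people
saved_by_vaccine = at_risk * 0.90          # 90% efficacy applied to those at risk
print(f"At risk: {at_risk:,.0f}; saved by a 90%-effective vaccine: {saved_by_vaccine:,.0f}")

town = 1_000
sick_after_one_dose = town * (1 - 0.80)    # about 200 people
sick_after_two_doses = town * (1 - 0.95)   # about 50 people
print(f"Sick after one dose: {sick_after_one_dose:.0f}; after two doses: {sick_after_two_doses:.0f}")
```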

While we may wish we had a perfect answer to our public health crisis right from the start, the practice of public health always faces two significant obstacles. First, science does not look good when we watch it in real time, because science is about failing and stumbling our way to getting it right. The second obstacle is that public health, by its nature, deals with vast numbers of people over long periods, and frequently those people are not us in any given situation.

It is easy to take public health for granted for those two reasons, but public health is still one of our modern triumphs. Public health has done many good things for a large number of people and deserves our support. We need not take the public health officials and scientists at their word, but the math speaks for itself.

Competition and Activation

(From a writer I respect, Seth Godin)

An innovator rarely runs into a competition problem, because the challenge is not that your customers are buying from another supplier; it is that they have never bought this kind of thing from anyone.

The work we do and the stories we tell when we seek to activate people are very different from the notion of competition, yet the lessons of our culture (sports, mass merchants, politics) are all about competing.

“We are better than they are” is a slogan of competition.

That is very different from the ideas of “things could be better,” “you are missing out on this new thing,” or “the people you admire are already using it.”

If you want to grow, you need someone not only to decide that your creation is worth their time and money, but also to be motivated to act right away, rather than only thinking about acting long after the fact.

Regression Model for Kaggle Tabular Playground Series 2021 Feb Using Python and TensorFlow

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series 2021 Feb dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. The February dataset may be synthetic but is based on a real dataset and generated using a CTGAN. The original dataset tries to predict the amount of an insurance claim. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the cross-validated TensorFlow models achieved an average RMSE benchmark of 0.8642. When we applied the final model to Kaggle’s test dataset, the model achieved an RMSE score of 0.8642.

CONCLUSION: In this iteration, the TensorFlow model appeared to be a suitable algorithm for modeling this dataset.
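
As a rough illustration of the cross-validation setup, the sketch below builds a small dense network and averages the fold RMSE scores. The architecture, preprocessing, and training settings are assumptions for illustration, not the exact configuration behind the scores above.

```python
# Hypothetical cross-validated TensorFlow baseline with a small dense network.
# Column handling, architecture, and epochs are illustrative assumptions.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

train_df = pd.read_csv("train.csv")
# Assumed preprocessing: one-hot encode the categorical columns, keep numerics.
X = pd.get_dummies(train_df.drop(columns=["id", "target"])).to_numpy(dtype="float32")
y = train_df["target"].to_numpy(dtype="float32")

def build_model(n_features):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

rmse_scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X):
    scaler = StandardScaler().fit(X[train_idx])
    model = build_model(X.shape[1])
    model.fit(scaler.transform(X[train_idx]), y[train_idx],
              epochs=20, batch_size=256, verbose=0)
    _, rmse = model.evaluate(scaler.transform(X[val_idx]), y[val_idx], verbose=0)
    rmse_scores.append(rmse)

print("Average cross-validated RMSE:", np.mean(rmse_scores))
```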

Dataset Used: Kaggle Tabular Playground Series 2021 Feb Data Set

Dataset ML Model: Regression with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-feb-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-feb-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Regression Model for Kaggle Tabular Playground Series 2021 Feb Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series 2021 Feb dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. The February dataset may be synthetic but is based on a real dataset and generated using a CTGAN. The original dataset tries to predict the amount of an insurance claim. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the preliminary XGBoost model achieved an RMSE benchmark of 0.8531. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final RMSE score of 0.8434. When we applied the last model to Kaggle’s test dataset, the model achieved an RMSE score of 0.8443.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.
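
The sketch below shows a minimal XGBoost baseline of the kind described above; the encoding choice and hyperparameters are placeholders, not the tuned values that produced the scores in this report.

```python
# Minimal XGBoost regression sketch; hyperparameters are placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

train_df = pd.read_csv("train.csv")
X = train_df.drop(columns=["id", "target"]).copy()
# Assumed encoding: integer-code the categorical columns for the tree model.
for col in X.select_dtypes(include="object").columns:
    X[col] = X[col].astype("category").cat.codes
y = train_df["target"]

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6,
                     subsample=0.8, colsample_bytree=0.8, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_val)
print("Validation RMSE:", np.sqrt(mean_squared_error(y_val, preds)))
```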

Dataset Used: Kaggle Tabular Playground Series 2021 Feb Data Set

Dataset ML Model: Regression with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-feb-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-feb-2021/leaderboard

The HTML formatted report can be found here on GitHub: https://github.com/daines-analytics/tabular-data-projects/tree/master/py_regression_kaggle_tabular_playground_2021feb

Regression Model for Kaggle Tabular Playground Series 2021 Feb Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series 2021 Feb dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. The February dataset may be synthetic but is based on a real dataset and generated using a CTGAN. The original dataset tries to predict the amount of an insurance claim. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The average performance of the machine learning algorithms achieved an RMSE benchmark of 0.8790 using the training dataset. We selected Random Forest and Gradient Boosting to perform the tuning exercises. After a series of tuning trials, the refined Gradient Boosting model processed the training dataset with a final RMSE score of 0.8447. When we processed Kaggle’s test dataset with the final model, the model achieved an RMSE score of 0.8455.

CONCLUSION: In this iteration, the Gradient Boosting model appeared to be a suitable algorithm for modeling this dataset.
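
A minimal sketch of the benchmarking step follows, assuming a simple integer encoding for any categorical columns; the Gradient Boosting hyperparameters shown are illustrative, not the tuned values from the report.

```python
# Sketch of a cross-validated scikit-learn benchmark with Gradient Boosting.
# Encoding choice and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

train_df = pd.read_csv("train.csv")
X = train_df.drop(columns=["id", "target"]).copy()
for col in X.select_dtypes(include="object").columns:
    X[col] = X[col].astype("category").cat.codes
y = train_df["target"]

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=4, random_state=42)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("Average cross-validated RMSE:", -scores.mean())
```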

Dataset Used: Kaggle Tabular Playground Series 2021 Feb Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-feb-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-feb-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Feature Selection for Kaggle Tabular Playground Series 2021 Jan Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: Feature selection involves picking the set of features that are most relevant to the target variable. This can help reduce the complexity of our model and minimize the resources required for training and inference. The Kaggle Tabular Playground Series Jan 2021 dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: In this notebook, we will run through the different techniques in performing feature selection on the dataset. We will leverage the Scikit-learn library, which features various machine learning algorithms and has built-in implementations of various feature selection methods. We will compare which method works best for this particular dataset.

ANALYSIS: The feature selection technique that yielded the best RMSE score was Recursive Feature Elimination (RFE). Its RMSE for the training dataset was 0.7082.

CONCLUSION: In this iteration, the RFE technique appeared to be suitable for modeling this dataset. We should follow up on the feature selection exercise by modeling the whole dataset using the selected attributes.
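
The sketch below shows one way Recursive Feature Elimination could be wired up with scikit-learn for this dataset; the estimator, the number of features to keep, and the column names are assumptions for illustration.

```python
# Sketch of recursive feature elimination on the Jan 2021 training data.
# Estimator, feature count, and column names are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

train_df = pd.read_csv("train.csv")
X = train_df.drop(columns=["id", "target"])   # numerical features only
y = train_df["target"]

# RFE drops the weakest features one step at a time until the requested
# number remain; the downstream model is then scored on that subset.
pipeline = Pipeline([
    ("rfe", RFE(estimator=GradientBoostingRegressor(random_state=42),
                n_features_to_select=10)),
    ("model", GradientBoostingRegressor(random_state=42)),
])
scores = cross_val_score(pipeline, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("RFE cross-validated RMSE:", -scores.mean())
```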

Dataset Used: Kaggle Tabular Playground Series 2021 Jan Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-jan-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-jan-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Regression Model for Kaggle Tabular Playground Series 2021 Jan Using Python and AutoKeras

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series 2021 Jan dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have been hosting playground-style competitions on Kaggle with fun but less complex, tabular datasets. These competitions will be great for people looking for something between the Titanic Getting Started competition and a Featured competition.

ANALYSIS: The performance of the best, preliminary AutoKeras model achieved an RMSE benchmark of 0.7084. When we applied the final model to Kaggle’s test dataset, the model achieved an RMSE score of 0.7092.

CONCLUSION: In this iteration, the TensorFlow model from AutoKeras appeared to be a suitable algorithm for modeling this dataset.
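
Once the architecture search finishes, the best AutoKeras model can be exported and used to score Kaggle’s test file, as in the sketch below; the file paths, column names, and search settings are assumptions, not the report’s exact artifacts.

```python
# Sketch of exporting the best AutoKeras model and writing a submission file.
# Paths, column names, and search settings are illustrative assumptions.
import pandas as pd
import autokeras as ak

train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")
X_train = train_df.drop(columns=["id", "target"])
y_train = train_df["target"]

reg = ak.StructuredDataRegressor(max_trials=10, overwrite=True, seed=42)
reg.fit(X_train, y_train, epochs=20)

# Export the underlying Keras model for inspection or reuse.
best_model = reg.export_model()
best_model.summary()

# Score the test file and write a Kaggle-style submission.
submission = pd.DataFrame({
    "id": test_df["id"],
    "target": reg.predict(test_df.drop(columns=["id"])).flatten(),
})
submission.to_csv("submission.csv", index=False)
```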

Dataset Used: Kaggle Tabular Playground Series 2021 Jan Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-jan-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-jan-2021/leaderboard

The HTML formatted report can be found here on GitHub.