Binary Classification Model for Rain in Australia Using TensorFlow Take 4

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Rain in Australia dataset is a binary classification situation where we are trying to predict one of the two possible outcomes.

INTRODUCTION: This dataset contains daily weather observations from numerous Australian weather stations. The target variable RainTomorrow indicates whether it rained the next day. We should also exclude the variable Risk-MM when training a binary classification model; if we do not eliminate the Risk-MM feature, we risk leaking the answer into the model and compromising its effectiveness.
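As a minimal illustration of removing the leakage feature, the Python sketch below drops the column before splitting out the target. The file name (weatherAUS.csv) and column name (RISK_MM) are assumptions based on the Kaggle dataset; adjust them to match the actual file.

```python
# A hedged sketch: drop the leakage-prone feature before modeling.
import pandas as pd

# Assumed file and column names from the Kaggle dataset; adjust as needed.
df = pd.read_csv("weatherAUS.csv")
df = df.drop(columns=["RISK_MM"], errors="ignore")  # remove the leakage feature
X = df.drop(columns=["RainTomorrow"])               # predictors
y = df["RainTomorrow"]                              # binary target
```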

In iteration Take1, we constructed several traditional machine learning models using the linear, non-linear, and ensemble techniques. We also observed the best accuracy score that we could obtain with each of these models.

In iteration Take2, we constructed and tuned an XGBoost machine learning model for this dataset. We also observed the best accuracy score that we could obtain with the XGBoost model.

In iteration Take3, we constructed several Multilayer Perceptron (MLP) models with one, two, and three hidden layers. The one-layer MLP model serves as the baseline model as we build more complex MLP models in future iterations.

In this Take4 iteration, we will tune the single-layer MLP model and see whether we can improve our accuracy score.

ANALYSIS: In iteration Take1, the baseline performance of the machine learning algorithms achieved an average accuracy of 83.83%. Two algorithms (Extra Trees and Random Forest) achieved the top accuracy metrics after the first round of modeling. After a series of tuning trials, Random Forest turned in a better overall result than Extra Trees, with a lower variance, achieving an accuracy metric of 85.44%. When configured with the optimized parameters, the Random Forest algorithm processed the test dataset with an accuracy of 85.52%, which was consistent with the accuracy score from the training phase.

In iteration Take2, the XGBoost algorithm achieved a baseline accuracy of 84.69% with n_estimators set to the default value of 100. After a series of tuning trials, XGBoost turned in an overall accuracy result of 86.21% with the n_estimators value set to 1000. When we applied the tuned XGBoost model to the test dataset, we obtained an accuracy score of 86.27%, which was consistent with the model performance from the training phase.

In iteration Take3, all one-layer models achieved an accuracy performance of around 86%. The eight-node model appears to overfit the least when compared with the 12-, 16-, and 20-node models. The single-layer eight-node model also seems to work better than the two- and three-layer models, processing the test dataset with an accuracy score of 86.10% after 20 epochs.

In this Take4 iteration, all models achieved an accuracy performance of around 86%. The model with the RMSprop optimizer appears to have the best accuracy, processing the test dataset with an accuracy score of 86.23% after 20 epochs.
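The Keras sketch below shows the kind of optimizer comparison this iteration describes: the single-layer, eight-node architecture from Take3 retrained with several optimizers. The placeholder data, batch size, and optimizer list are illustrative assumptions, not the exact experiment.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20)).astype("float32")    # placeholder features
y_train = rng.integers(0, 2, size=1000).astype("float32")  # placeholder labels

def build_model(optimizer, n_features):
    """Single hidden layer with eight nodes, per the Take3 baseline."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=optimizer, loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Retrain the same architecture with several optimizers and compare.
for opt in ["sgd", "rmsprop", "adam", "adagrad"]:
    model = build_model(opt, X_train.shape[1])
    history = model.fit(X_train, y_train, epochs=20, batch_size=32,
                        validation_split=0.2, verbose=0)
    print(opt, "best val_accuracy: %.4f" % max(history.history["val_accuracy"]))
```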

CONCLUSION: For this iteration, the single-layer eight-node MLP model produced an accuracy score comparable to that of the XGBoost model. For this dataset, we should consider doing more tuning with both the XGBoost and MLP models.

Dataset Used: Rain in Australia Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package

One potential source of performance benchmark: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package/kernels

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Rain in Australia Using TensorFlow Take 3

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Rain in Australia dataset is a binary classification situation where we are trying to predict one of the two possible outcomes.

INTRODUCTION: This dataset contains daily weather observations from numerous Australian weather stations. The target variable RainTomorrow indicates whether it rained the next day. We should also exclude the variable Risk-MM when training a binary classification model; if we do not eliminate the Risk-MM feature, we risk leaking the answer into the model and compromising its effectiveness.

In iteration Take1, we constructed several traditional machine learning models using the linear, non-linear, and ensemble techniques. We also observed the best accuracy score that we could obtain with each of these models.

In iteration Take2, we constructed and tuned an XGBoost machine learning model for this dataset. We also observed the best accuracy score that we could obtain with the XGBoost model.

In this Take3 iteration, we will construct several Multilayer Perceptron (MLP) models with one, two, and three hidden layers. These simple MLP models will serve as the baseline models as we build more complex MLP models in future iterations.

ANALYSIS: In iteration Take1, the baseline performance of the machine learning algorithms achieved an average accuracy of 83.83%. Two algorithms (Extra Trees and Random Forest) achieved the top accuracy metrics after the first round of modeling. After a series of tuning trials, Random Forest turned in a better overall result than Extra Trees, with a lower variance, achieving an accuracy metric of 85.44%. When configured with the optimized parameters, the Random Forest algorithm processed the test dataset with an accuracy of 85.52%, which was consistent with the accuracy score from the training phase.

In iteration Take2, the XGBoost algorithm achieved a baseline accuracy of 84.69% with n_estimators set to the default value of 100. After a series of tuning trials, XGBoost turned in an overall accuracy result of 86.21% with the n_estimators value set to 1000. When we applied the tuned XGBoost model to the test dataset, we obtained an accuracy score of 86.27%, which was consistent with the model performance from the training phase.

In this Take3 iteration, all one-layer models achieved an accuracy performance of around 86%. The eight-node model appears to overfit the least when compared with the 12-, 16-, and 20-node models. The single-layer eight-node model also seems to work better than the two- and three-layer models, processing the test dataset with an accuracy score of 86.10% after 20 epochs.
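A minimal Keras sketch of the baseline architectures described above, using a hypothetical make_mlp helper to vary hidden-layer widths and depths; feature preparation (encoding, scaling) is omitted, and the feature count is a placeholder.

```python
import tensorflow as tf

def make_mlp(n_features, hidden_layers=(8,)):
    """Hypothetical helper: build an MLP with the given hidden-layer widths."""
    layers = [tf.keras.layers.Input(shape=(n_features,))]
    for width in hidden_layers:
        layers.append(tf.keras.layers.Dense(width, activation="relu"))
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Width sweep for one hidden layer, plus two- and three-layer variants.
for hidden in [(8,), (12,), (16,), (20,), (8, 8), (8, 8, 8)]:
    model = make_mlp(n_features=20, hidden_layers=hidden)  # 20 is a placeholder
    print(hidden, "parameters:", model.count_params())
```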

CONCLUSION: For this iteration, the single-layer eight-node MLP model produced an accuracy score comparable to that of the XGBoost model. For this dataset, we should consider doing more tuning with both the XGBoost and MLP models.

Dataset Used: Rain in Australia Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package

One potential source of performance benchmark: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package/kernels

The HTML formatted report can be found here on GitHub.

Kathy Sierra on Making Users Awesome, Part 7

In the book, Badass: Making Users Awesome, Kathy Sierra analyzed and discussed the new ways of thinking about designing and sustaining successful products and services.

These are some of my takeaways from reading the book.

In this section, Kathy continues the discussion on how to help our users keep wanting to get better at a skill. We can help them move forward with two approaches.

The first approach is to remove the blocks to their progress. The second approach is to examine the elements that can pull the user forward.

To help users stay motivated, we need to give them two things: progress and payoff.

We know what to do about managing progress. What can we do about the payoff?

Kathy suggests that we need to provide ideas and tools to help users use their current skills early and often.

By asking the question, “What can they do within the first 30 minutes?” we seek to lower the initial threshold for “user-does-something-meaningful.”

However, fear can derail users before they start. If we want the users to feel powerful early, we need to anticipate and compensate for anything that keeps them from experimenting.

We can give users the ability to try things and provide them the information and tools to recover from their experiments without breaking anything.

The ideal user path is a continuous series of loops, each with a motivating “next superpower” goal, skill-building work with exposure-to-good-examples, followed by a payoff.

The best payoff of all is the intrinsically rewarding experience, where users value the experience for its own sake. Two kinds of intrinsic motivation can be powerful.

The first kind is “High Resolution,” where users develop an appreciation for increasingly subtle details that others cannot perceive.

The second kind is “Flow,” where users are so fully absorbed in a stimulating and challenging activity that they lose their sense of time.

The users need to reach those high-payoff goals for themselves, but we can give them some tips and tricks for the domain to help them get there faster.

These tips and tricks are not convenient, corner-cutting shortcuts; they are about helping users bypass the unnecessarily long way. We do not want our users to spend too much time reinforcing beginner skills. We need to help them make continual progress on their paths.

When in Times of Chaos

(From a writer I respect, Seth Godin)

We have two choices:

One path is to join in the stress, the noise, and the frenzy, and make things even more chaotic. That is how chaos spreads. Joining in the anxiety feels like the thing we are supposed to do, but it is not. In reality, that anxiety helps no one and may make things harder for the people who genuinely need help. If someone needs a hand, reach out and lend one. But if not, amplifying the chaos only makes things worse, and we should consider a different approach.

The other path is to take some time now to think strategically and figure out what to do next. During every market disruption, someone starts building a new market. During career upheavals, new careers are built.

The magic of learning is that the decision is yours to make. That remains true even after the chaos subsides.

Binary Classification Model for Rain in Australia Using Python Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Rain in Australia dataset is a binary classification situation where we are trying to predict one of the two possible outcomes.

INTRODUCTION: This dataset contains daily weather observations from numerous Australian weather stations. The target variable RainTomorrow indicates whether it rained the next day. We should also exclude the variable Risk-MM when training a binary classification model; if we do not eliminate the Risk-MM feature, we risk leaking the answer into the model and compromising its effectiveness.

In iteration Take1, we constructed several traditional machine learning models using the linear, non-linear, and ensemble techniques. We also observed the best accuracy score that we could obtain with each of these models.

In this Take2 iteration, we will construct and tune an XGBoost machine learning model for this dataset. We will observe the best accuracy score that we can obtain with the XGBoost model.

ANALYSIS: In iteration Take1, the baseline performance of the machine learning algorithms achieved an average accuracy of 83.83%. Two algorithms (Extra Trees and Random Forest) achieved the top accuracy metrics after the first round of modeling. After a series of tuning trials, Random Forest turned in a better overall result than Extra Trees, with a lower variance, achieving an accuracy metric of 85.44%. When configured with the optimized parameters, the Random Forest algorithm processed the test dataset with an accuracy of 85.52%, which was consistent with the accuracy score from the training phase.

In this Take2 iteration, the XGBoost algorithm achieved a baseline accuracy of 84.69% with n_estimators set to the default value of 100. After a series of tuning trials, XGBoost turned in an overall accuracy result of 86.21% with the n_estimators value set to 1000. When we applied the tuned XGBoost model to the test dataset, we obtained an accuracy score of 86.27%, which was consistent with the model performance from the training phase.
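A minimal sketch of the kind of n_estimators sweep described above, assuming preprocessed training arrays; the placeholder data and cross-validation setup are illustrative, not the exact experiment.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))     # placeholder features
y_train = rng.integers(0, 2, size=500)   # placeholder labels

# Sweep n_estimators and compare cross-validated accuracy.
for n in [100, 250, 500, 750, 1000]:
    model = XGBClassifier(n_estimators=n, eval_metric="logloss")
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy")
    print(n, "accuracy: %.4f (+/- %.4f)" % (scores.mean(), scores.std()))
```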

CONCLUSION: For this iteration, the XGBoost algorithm achieved the best overall result using the training and test datasets. For this dataset, XGBoost should be considered for further modeling.

Dataset Used: Rain in Australia Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package

One potential source of performance benchmark: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package/kernels

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Rain in Australia Using Python Take 1

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Rain in Australia dataset is a binary classification situation where we are trying to predict one of the two possible outcomes.

INTRODUCTION: This dataset contains daily weather observations from numerous Australian weather stations. The target variable RainTomorrow indicates whether it rained the next day. We should also exclude the variable Risk-MM when training a binary classification model; if we do not eliminate the Risk-MM feature, we risk leaking the answer into the model and compromising its effectiveness.

ANALYSIS: The baseline performance of the machine learning algorithms achieved an average accuracy of 78.75%. Two algorithms (Extra Trees and Random Forest) achieved the top accuracy metrics after the first round of modeling. After a series of tuning trials, Random Forest turned in a better overall result than Extra Trees, with a lower variance, achieving an accuracy metric of 85.44%. When configured with the optimized parameters, the Random Forest algorithm processed the test dataset with an accuracy of 85.52%, which was consistent with the accuracy score from the training phase.
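A minimal scikit-learn sketch of the kind of algorithm spot-check described above; the model list is abbreviated for illustration, and the placeholder data stands in for the prepared dataset.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))     # placeholder features
y_train = rng.integers(0, 2, size=500)   # placeholder labels

# Abbreviated model list: one linear model plus the two top ensembles.
models = {
    "LR": LogisticRegression(max_iter=1000),
    "ET": ExtraTreesClassifier(n_estimators=100),
    "RF": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=10, scoring="accuracy")
    print("%s: %.4f (%.4f)" % (name, scores.mean(), scores.std()))
```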

CONCLUSION: For this iteration, the Random Forest algorithm achieved the best overall results using the training and test datasets. For this dataset, Random Forest should be considered for further modeling.

Dataset Used: Rain in Australia Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package

One potential source of performance benchmark: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package/kernels

The HTML formatted report can be found here on GitHub.

Time Series Model for Vehicle Miles Traveled Using Python

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a time series prediction model and document the end-to-end steps using a template. The Vehicle Miles Traveled dataset is a time series situation where we are trying to forecast future outcomes based on past data points.

INTRODUCTION: The problem is to forecast the monthly number of vehicle miles traveled in the United States. The dataset describes a time series of miles (in millions) over 20 years (2000-2019) and contains 239 observations. We used the first 80% of the observations for training various models while holding back the remaining observations for validating the final model.

ANALYSIS: The baseline prediction (or persistence) for the dataset resulted in an RMSE of 18139. After performing a grid search for the optimal ARIMA parameters, the final ARIMA non-seasonal order was (0, 1, 0) with a seasonal order of (0, 1, 0, 12). The chosen model processed the validation data with an RMSE of 1856, significantly better than the baseline model, as expected.
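A minimal statsmodels sketch of the persistence baseline and the chosen SARIMA fit described above, assuming a monthly pandas Series; the synthetic placeholder series stands in for the actual FRED data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder series standing in for the monthly FRED data (239 observations).
rng = np.random.default_rng(0)
series = pd.Series(200000 + rng.normal(0, 5000, 239),
                   index=pd.date_range("2000-01-01", periods=239, freq="MS"))

split = int(len(series) * 0.8)
train, test = series[:split], series[split:]

# Persistence baseline: forecast each month with the previous observed value.
persistence = test.shift(1).fillna(train.iloc[-1])
rmse_base = float(np.sqrt(((test - persistence) ** 2).mean()))

# SARIMA fit with the grid-searched orders from the write-up.
fit = SARIMAX(train, order=(0, 1, 0), seasonal_order=(0, 1, 0, 12)).fit(disp=False)
forecast = fit.forecast(steps=len(test))
rmse_arima = float(np.sqrt(((test.values - forecast.values) ** 2).mean()))
print("persistence RMSE: %.0f  ARIMA RMSE: %.0f" % (rmse_base, rmse_arima))
```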

CONCLUSION: For this dataset, the chosen ARIMA model achieved a satisfactory result and should be considered for further modeling.

Dataset Used: Vehicle Miles Traveled

Dataset ML Model: Time series forecast with numerical attributes

Dataset Reference: https://fred.stlouisfed.org/series/VMT

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Springleaf Marketing Response Using TensorFlow Take 6

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Springleaf Marketing Response dataset is a binary classification situation where we are trying to predict one of the two possible outcomes.

INTRODUCTION: Springleaf leverages the direct mail method for connecting with customers who may need a loan. To improve their targeting efforts, Springleaf must be sure they are focusing on the customers who are likely to respond and be good candidates for their services. Using a dataset with a broad set of anonymized features, Springleaf is looking to predict which customers will respond to a direct mail offer.

In iteration Take1, we constructed several traditional machine learning models using the linear, non-linear, and ensemble techniques. We also observed the best ROC-AUC result that we could obtain with each of these models.

In iteration Take2, we constructed and tuned an XGBoost machine learning model for this dataset. We also observed the best ROC-AUC result that we could obtain with the XGBoost model.

In iteration Take3, we constructed several Multilayer Perceptron (MLP) models with one hidden layer of 64, 128, 256, 512, 1024, and 2048 nodes. These single-layer MLP models serve as the baseline models as we build more complex MLP models in future iterations.

In iteration Take4, we constructed several Multilayer Perceptron (MLP) models with two hidden layers. We also observed whether these two-layer MLP models could improve the ROC-AUC performance of the single-layer models.

In iteration Take5, we constructed several Multilayer Perceptron (MLP) models with three hidden layers. We also observed whether these three-layer MLP models could improve the ROC-AUC performance of the single-layer models.

In this Take6 iteration, we will construct several Multilayer Perceptron (MLP) models with four hidden layers. We will observe whether these four-layer MLP models can improve the ROC-AUC performance of the single-layer models.
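A minimal Keras sketch of a four-hidden-layer MLP scored with ROC-AUC, as described for this iteration; the layer widths and placeholder data are illustrative assumptions, not the actual configuration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 50)).astype("float32")    # placeholder features
y_train = rng.integers(0, 2, size=1000).astype("float32")  # placeholder labels

# Four hidden layers; the widths here are illustrative, not the tested configs.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="roc_auc")])
model.fit(X_train, y_train, epochs=20, batch_size=64,
          validation_split=0.2, verbose=0)
```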

ANALYSIS: In iteration Take1, the baseline performance of the machine learning algorithms achieved an average ROC-AUC of 70.42%. The Random Forest and Gradient Boosting Machine algorithms achieved the top ROC-AUC metrics after the first round of modeling. After a series of tuning trials, GBM turned in an overall ROC-AUC result of 77.96%. When we applied the tuned GBM algorithm to the test dataset, we obtained a ROC-AUC score of only 62.58%, which was much lower than the score from model training.

In iteration Take2, the XGBoost algorithm achieved a baseline ROC-AUC performance of 77.08%. After a series of tuning trials, XGBoost turned in an overall best ROC-AUC result of 78.23%. When we applied the tuned XGBoost algorithm to the test dataset, we obtained a ROC-AUC score of only 62.86%, which was much lower than the score from model training.

In iteration Take3, all one-layer models achieved a ROC-AUC performance of around 50%.

In iteration Take4, all two-layer models again achieved a ROC-AUC performance of around 50%.

In iteration Take5, all three-layer models once again achieved a ROC-AUC performance of around 50%.

In this Take6 iteration, all four-layer models once again achieved a ROC-AUC performance of around 50%.

CONCLUSION: For this iteration, all four-layer models scored poorly on the ROC-AUC metric. For this dataset, we should consider modeling with XGBoost or other ensemble algorithms.

Dataset Used: Springleaf Marketing Response Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/springleaf-marketing-response/data

One potential source of performance benchmark: https://www.kaggle.com/c/springleaf-marketing-response/leaderboard

The HTML formatted report can be found here on GitHub.