Binary Class Image Classification Deep Learning Model for CycleGAN Cezanne vs. Photo Using TensorFlow Take 3

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Cezanne vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the DenseNet121 architecture to make predictions.
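The post does not reproduce the modeling code, but a minimal TensorFlow sketch of a DenseNet121-based transfer-learning classifier for this binary task might look like the following (the input size, frozen backbone, pooling head, and optimizer are assumptions rather than the project's exact configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# DenseNet121 backbone pre-trained on ImageNet, without its top classifier.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the backbone for the initial training pass

inputs = layers.Input(shape=IMG_SIZE + (3,))
# DenseNet expects its own preprocessing of the raw pixel values.
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary output: Cezanne vs. photo

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```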

ANALYSIS: In this iteration, the DenseNet121 model achieved an accuracy score of 99.80% after ten epochs on the training dataset. The same model reached an accuracy of 98.90% on the validation dataset. Finally, the trained model scored an accuracy of 99.87% on the test dataset.

CONCLUSION: In this iteration, the DenseNet121-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Cezanne vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Cezanne vs. Photo Using TensorFlow Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Cezanne vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the ResNet50V2 architecture to make predictions.
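Relative to the other takes, only the backbone changes. A hedged sketch of the ResNet50V2 variant, with the same assumed head and hyperparameters as the DenseNet121 sketch above:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# ResNet50V2 backbone pre-trained on ImageNet, frozen, with a small binary head.
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```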

ANALYSIS: In this iteration, the ResNet50V2 model achieved an accuracy score of 99.49% after ten epochs on the training dataset. The same model reached an accuracy of 98.53% on the validation dataset. Finally, the trained model scored an accuracy of 98.51% on the test dataset.

CONCLUSION: In this iteration, the ResNet50V2-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Cezanne vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Cezanne vs. Photo Using TensorFlow Take 1

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Cezanne vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the InceptionV3 architecture to make predictions.
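A corresponding hedged sketch for the InceptionV3 variant; the 299x299 input size matches InceptionV3's default and is an assumption, not a documented setting of this project:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # InceptionV3's native input size (assumed here)

# InceptionV3 backbone pre-trained on ImageNet, frozen, with a binary head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```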

ANALYSIS: In this iteration, the InceptionV3 model achieved an accuracy score of 99.65% after ten epochs on the training dataset. The same model reached an accuracy of 98.24% on the validation dataset. Finally, the trained model scored an accuracy of 99.75% on the test dataset.

CONCLUSION: In this iteration, the InceptionV3-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Cezanne vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Monet vs. Photo Using TensorFlow Take 4

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Monet vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the VGG16 architecture to make predictions.
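A hedged sketch of the VGG16 variant; the flattened dense head shown here is a common choice for VGG backbones and is an assumption, not the project's documented head:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# VGG16 backbone pre-trained on ImageNet, frozen, with a flattened dense head.
base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)  # assumed head size
outputs = layers.Dense(1, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```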

ANALYSIS: In this iteration, the VGG16 model achieved an accuracy score of 96.81% after ten epochs on the training dataset. The same model reached an accuracy of 94.49% on the validation dataset. Finally, the trained model scored an accuracy of 96.56% on the test dataset.

CONCLUSION: In this iteration, the VGG16-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Monet vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Monet vs. Photo Using TensorFlow Take 3

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Monet vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the DenseNet121 architecture to make predictions.
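The DenseNet121 classifier itself can be assembled as in the Cezanne Take 3 sketch earlier on this page; the snippet below adds an assumed data pipeline and a ten-epoch training run (directory layout, batch size, and validation split are illustrative, not taken from the post):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # assumed batch size

# Assumed layout: one sub-folder per class, e.g. train/monet and train/photo.
# The raw CycleGAN download (trainA/trainB) would need to be arranged this way first.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "monet2photo/train", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "monet2photo/train", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")

# DenseNet121 backbone with a binary head, as in the earlier sketch.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False
inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=10)
```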

ANALYSIS: In this iteration, the DenseNet121 model achieved an accuracy score of 99.30% after ten epochs on the training dataset. The same model reached an accuracy of 96.53% on the validation dataset. Finally, the trained model scored an accuracy of 95.64% on the test dataset.

CONCLUSION: In this iteration, the DenseNet121-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Monet vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.

Algorithmic Trading Model using Stochastic RSI with Different Signal Levels

NOTE: This script is for learning purposes only and does not constitute a recommendation for buying or selling any stock mentioned in this script.

SUMMARY: This project aims to construct and test an algorithmic trading model and document the end-to-end steps using a template.

INTRODUCTION: This algorithmic trading model employs a simple mean-reversion strategy using the Stochastic RSI (StochRSI) indicator for stock position entries and exits. The model uses a 14-period look-back for the Stochastic RSI calculation. The model will initiate a long position when the indicator crosses the lower signal line from above. Conversely, the model will exit the long position when the indicator crosses the upper signal line from below.
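Below is a minimal pandas sketch of the entry/exit logic described above. The RSI smoothing (a simple rolling mean rather than Wilder's smoothing), the 0.20/0.80 signal levels, and the function names are assumptions for illustration; the project tests several signal-level pairs.

```python
import pandas as pd

def stoch_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Stochastic RSI with a 14-period look-back for both the RSI and the stochastic step."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    rsi = 100 - 100 / (1 + gain / loss)
    lowest = rsi.rolling(period).min()
    highest = rsi.rolling(period).max()
    return (rsi - lowest) / (highest - lowest)

def long_signals(close: pd.Series, lower: float = 0.20, upper: float = 0.80) -> pd.Series:
    """1 = hold a long position, 0 = flat, following the entry/exit rules in the text."""
    k = stoch_rsi(close)
    position = pd.Series(0, index=close.index, dtype=int)
    holding = False
    for i in range(1, len(k)):
        # Enter long when StochRSI crosses the lower signal line from above.
        if not holding and k.iloc[i - 1] > lower and k.iloc[i] <= lower:
            holding = True
        # Exit the long when StochRSI crosses the upper signal line from below.
        elif holding and k.iloc[i - 1] < upper and k.iloc[i] >= upper:
            holding = False
        position.iloc[i] = int(holding)
    return position
```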

ANALYSIS: In this modeling iteration, we analyzed ten stocks between August 1, 2016, and September 10, 2021. The models' performance metrics appear at the end of the script. The models with the wider signal line width generally produced a better return for the tested stocks. However, the simple buy-and-hold approach came out ahead for all stocks.

CONCLUSION: For most stocks during the modeling time frame, the long-only trading strategy with the Stochastic RSI signals did not produce a better return than the buy-and-hold approach. We should consider modeling these stocks further by experimenting with more variations of the strategy.

Dataset ML Model: Time series analysis with numerical attributes

Dataset Used: Quandl

The HTML formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Monet vs. Photo Using TensorFlow Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Monet vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the ResNet50V2 architecture to make predictions.
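The ResNet50V2 classifier mirrors the Cezanne Take 2 sketch earlier on this page. One optional refinement, which the post does not state that it uses, is a short fine-tuning pass that unfreezes the top of the backbone at a much lower learning rate:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# Frozen ResNet50V2 backbone with a binary head, as in the earlier sketch.
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False
inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Optional fine-tuning pass (an assumption, not something the post describes):
# unfreeze the top of the backbone and recompile with a much lower learning rate.
base.trainable = True
for layer in base.layers[:-30]:        # keep all but the last ~30 layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
```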

ANALYSIS: In this iteration, the ResNet50V2 model achieved an accuracy score of 99.08% after ten epochs on the training dataset. The same model reached an accuracy of 97.96% on the validation dataset. Finally, the trained model scored an accuracy of 95.87% on the test dataset.

CONCLUSION: In this iteration, the ResNet50V2-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Monet vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Monet vs. Photo Using TensorFlow Take 1

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Monet vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the InceptionV3 architecture to make predictions.
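The InceptionV3 classifier follows the Cezanne Take 1 sketch earlier on this page. Once a model like this has been trained and saved, the test-set accuracy reported in the ANALYSIS below could be measured with a few lines; the saved-model file name and folder layout here are hypothetical:

```python
import tensorflow as tf

IMG_SIZE = (299, 299)  # assumed to match the InceptionV3 input size used at training time

# Assumed layout: one folder per class under monet2photo/test (e.g. test/monet, test/photo).
test_ds = tf.keras.utils.image_dataset_from_directory(
    "monet2photo/test", image_size=IMG_SIZE, batch_size=32,
    label_mode="binary", shuffle=False)

# Load the previously trained classifier (hypothetical file name).
model = tf.keras.models.load_model("inceptionv3_monet_vs_photo.h5")

loss, accuracy = model.evaluate(test_ds)
print(f"Test accuracy: {accuracy:.4f}")
```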

ANALYSIS: In this iteration, the InceptionV3 model achieved an accuracy score of 99.54% after ten epochs on the training dataset. The same model reached an accuracy of 97.89% on the validation dataset. Finally, the trained model scored an accuracy of 98.62% on the test dataset.

CONCLUSION: In this iteration, the InceptionV3-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Monet vs. Photo Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML formatted report can be found here on GitHub.