Algorithmic Trading Model using Stochastic RSI with Different Signal Levels

NOTE: This script is for learning purposes only and does not constitute a recommendation for buying or selling any stock mentioned in this script.

SUMMARY: This project aims to construct and test an algorithmic trading model and document the end-to-end steps using a template.

INTRODUCTION: This algorithmic trading model employs a simple mean-reversion strategy using the Stochastic RSI (StochRSI) indicator for stock position entries and exits. The model uses a 14-period look-back for the Stochastic RSI calculation. The model will initiate a long position when the indicator crosses the lower signal line from above. Conversely, the model will exit the long position when the indicator crosses the upper signal line from below.
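
To make the entry and exit logic concrete, here is a minimal sketch in Python with pandas. It is an illustration under stated assumptions, not the script's exact implementation: the Close column name and the 20/80 signal levels are placeholders, and the RSI here uses a simple-moving-average variant rather than Wilder's smoothing.

    import pandas as pd

    def stoch_rsi(close: pd.Series, period: int = 14) -> pd.Series:
        # RSI with simple-moving-average smoothing (an SMA variant of Wilder's RSI)
        delta = close.diff()
        gain = delta.clip(lower=0).rolling(period).mean()
        loss = -delta.clip(upper=0).rolling(period).mean()
        rsi = 100 - 100 / (1 + gain / loss)
        # Stochastics applied to the RSI series itself
        lowest = rsi.rolling(period).min()
        highest = rsi.rolling(period).max()
        return 100 * (rsi - lowest) / (highest - lowest)

    def signals(indicator: pd.Series, lower: float = 20, upper: float = 80) -> pd.Series:
        prev = indicator.shift(1)
        enter = (prev > lower) & (indicator <= lower)  # crosses lower line from above
        close_out = (prev < upper) & (indicator >= upper)  # crosses upper line from below
        return enter.astype(int) - close_out.astype(int)  # +1 = open long, -1 = exit long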

ANALYSIS: In this modeling iteration, we analyzed ten stocks between August 1, 2016, and September 10, 2021. The models' performance summary appears at the end of the script. The models with the wider signal line width generally produced a better return for the tested stocks. However, the simple buy-and-hold approach came out ahead for all stocks.

CONCLUSION: For most stocks during the modeling time frame, the long-only trading strategy with the Stochastic RSI signals did not produce a better return than the buy-and-hold approach. We should consider modeling these stocks further by experimenting with more variations of the strategy.

Dataset ML Model: Time series analysis with numerical attributes

Dataset Used: Quandl

The HTML-formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Monet vs. Photo Using TensorFlow Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Monet vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the ResNet50V2 architecture to make predictions.
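
As a hedged sketch, a ResNet50V2 transfer-learning setup in TensorFlow typically looks like the block below. The 256×256 input size, the frozen backbone, and the optimizer settings are assumptions for illustration, not the exact configuration used in this take.

    import tensorflow as tf

    # Assumed input size; the actual preprocessing pipeline may differ.
    base = tf.keras.applications.ResNet50V2(
        include_top=False, weights="imagenet",
        input_shape=(256, 256, 3), pooling="avg")
    base.trainable = False  # train only the new classification head at first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: Monet vs. photo
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])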

ANALYSIS: In this iteration, the ResNet50V2 model achieved an accuracy score of 99.08% on the training dataset after ten epochs. The same model processed the validation dataset with an accuracy measurement of 97.96%. Finally, the model processed the test dataset with an accuracy score of 95.87%.

CONCLUSION: In this iteration, the ResNet50V2-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Monet vs. Photo Dataset

Dataset ML Model: Binary image classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML-formatted report can be found here on GitHub.

Binary Class Image Classification Deep Learning Model for CycleGAN Monet vs. Photo Using TensorFlow Take 1

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The CycleGAN Monet vs. Photo dataset is a binary classification situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: The CycleGAN dataset collection contains images from two classes, A and B (for example, apple vs. orange, horses vs. zebras, and so on). The researchers used the images to train machine learning models for research work in Generative Adversarial Networks (GANs).

In this iteration, we will construct a CNN model based on the InceptionV3 architecture to make predictions.
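
The setup mirrors the ResNet50V2 take above, with only the backbone swapped; as before, the input size and frozen-backbone choice are assumptions for illustration.

    import tensorflow as tf

    # Same assumed head as the ResNet50V2 sketch; only the backbone changes.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet",
        input_shape=(256, 256, 3), pooling="avg")
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])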

ANALYSIS: In this iteration, the InceptionV3 model achieved an accuracy score of 99.54% on the training dataset after ten epochs. The same model processed the validation dataset with an accuracy measurement of 97.89%. Finally, the model processed the test dataset with an accuracy score of 98.62%.

CONCLUSION: In this iteration, the InceptionV3-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: CycleGAN Monet vs. Photo Dataset

Dataset ML Model: Binary image classification with numerical attributes

Dataset Reference: https://people.eecs.berkeley.edu/%7Etaesung_park/CycleGAN/datasets/

One potential source of performance benchmarks: https://arxiv.org/abs/1703.10593 or https://junyanz.github.io/CycleGAN/

The HTML-formatted report can be found here on GitHub.

Jeff Goins on Real Artists Don’t Starve, Part 5

In his book, Real Artists Don’t Starve: Timeless Strategies for Thriving in the New Creative Age, Jeff Goins discusses how we can apply prudent strategies to position ourselves to thrive in our chosen craft.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 5, Cultivate Patrons

In this chapter, Jeff discusses the importance of cultivating patrons who can support our work and succeed with us. He offers the following recommendations for us to think about:

  • In creative work, quality is subjective. That subjectivity means we must not only practice but also cultivate patrons for our work. A patron is an advocate who sees our potential and believes in our work. Support from a patron need not be financial; it could come from someone who gives us a chance or connects us to the right contacts.
  • Patrons might not be wealthy connoisseurs or influential leaders. They are people who are willing to help see our work succeed. It is also our job to recognize them and prove ourselves worthy of their investment.
  • To attract patrons, we need to be teachable. Being teachable means demonstrating both competency in our craft and a willingness to learn. Influencers want to help and invest in others, so being teachable makes it easier for them to support our work.
  • Creative work is a team sport: success often comes in the form of an artist-patron partnership. While much of the focus falls on artists finding their patrons, it is easy to miss that patrons also need artists they can believe in and trust.
  • One way to find patrons is to identify people who are already investing in others and reach out to them. If we work hard on our craft and share our competencies, we can find those who can help our work spread. Instead of waiting to be noticed, we should look for opportunities to be taught and molded by those who show genuine interest in our work.

In summary, “The Starving Artist waits to be noticed. The Thriving Artist cultivates patrons.”

Progress Is an Exchange

(From a writer I respect, Seth Godin)

It is easy to imagine that just a few steps ahead of here, our problems will disappear.

Of course, the pessimist is certain that tomorrow the problems will not only remain but get worse.

The truth is actually simple: everything we do, all we ever do, is trade one set of problems for another.

But problems have one redeeming feature: they give us a chance to see how to move forward productively. The goal is not a world entirely free of problems, but a situation with a different set of problems, one worth dancing with.

Multi-Class Image Classification Deep Learning Model for Chinese MNIST Characters Using TensorFlow

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The Chinese MNIST Characters dataset is a multi-class classification situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The Chinese MNIST dataset uses data collected by Dr. K Nazarpour and Dr. M Chen for a project at Newcastle University. One hundred Chinese nationals took part in the data collection. Each participant wrote all 15 numbers with a standard black ink pen in a table with 15 designated regions drawn on a white A4 sheet. Each participant repeated this process ten times, with each sheet scanned at a 300×300-pixel resolution. The process resulted in a dataset of 15,000 images, each representing one character from a set of 15 characters.

In this iteration, we will construct a few simple CNN models to predict the character class based on the available images.
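
As a rough sketch, the three-layer variant might look like the block below. The 64×64 grayscale input shape and the filter counts are assumptions, not the exact architecture from the script.

    import tensorflow as tf

    # Assumed 64x64 grayscale inputs; 15 output classes, one per character.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(15, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])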

ANALYSIS: The one-layer CNN model achieved an average accuracy score of 92.79% on the test dataset after 15 epochs. The three-layer CNN model processed the same test dataset with an average accuracy measurement of 97.92%.

CONCLUSION: In this iteration, the simple CNN models appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: Chinese MNIST Digit Recognizer

Dataset ML Model: Multi-class image classification with numerical attributes

Dataset Reference: https://www.kaggle.com/fedesoriano/chinese-mnist-digit-recognizer

One potential source of performance benchmarks: https://data.ncl.ac.uk/articles/Handwritten_Chinese_Numbers/10280831/1

The HTML-formatted report can be found here on GitHub.

Regression Model for Kaggle Tabular Playground Series 2021 August Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Series Aug 2021 dataset is a regression situation where we are trying to predict the value of a continuous variable.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new in their data science journey. Since January 2021, Kaggle has hosted playground-style competitions with fun but less complex, tabular datasets. The August dataset is synthetic but based on a real dataset and generated using a CTGAN. The original dataset tries to predict the loss from a loan default. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The machine learning algorithms achieved an average RMSE benchmark of 8.0771 using the training dataset. We selected ElasticNet and Gradient Boosting for the tuning exercises. After a series of tuning trials, the refined Gradient Boosting model processed the training dataset with a final RMSE score of 7.8563. When we processed Kaggle’s test dataset with the final model, the model achieved an RMSE score of 7.8416.
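
For illustration, the Gradient Boosting tuning step might resemble the sketch below. The parameter grid, cross-validation settings, and the synthetic stand-in data are assumptions, not the actual search performed in this project.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    # Synthetic stand-in for the competition's training data
    X_train, y_train = make_regression(
        n_samples=1000, n_features=20, noise=10.0, random_state=42)

    param_grid = {
        "n_estimators": [100, 300],
        "learning_rate": [0.05, 0.1],
        "max_depth": [3, 5],
    }
    search = GridSearchCV(
        GradientBoostingRegressor(random_state=42),
        param_grid,
        scoring="neg_root_mean_squared_error",  # scikit-learn reports RMSE negated
        cv=5,
        n_jobs=-1,
    )
    search.fit(X_train, y_train)
    print("Best CV RMSE:", -search.best_score_)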

CONCLUSION: In this iteration, the Gradient Boosting model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground Series Aug 2021 Data Set

Dataset ML Model: Regression with numerical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-aug-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-aug-2021/leaderboard

The HTML-formatted report can be found here on GitHub.

Algorithmic Trading Model using Stochastic Oscillator with Different Signal Levels

NOTE: This script is for learning purposes only and does not constitute a recommendation for buying or selling any stock mentioned in this script.

SUMMARY: This project aims to construct and test an algorithmic trading model and document the end-to-end steps using a template.

INTRODUCTION: This algorithmic trading model employs a simple mean-reversion strategy using the Stochastic Oscillator indicator for the entry and exit signals. The model uses a 14-period look-back with a three-period Simple Moving Average (SMA) to compute the %K indicator, and a three-period SMA of %K as the %D indicator. The model will initiate a long position when the %D indicator crosses the lower signal line from above. Conversely, the model will exit the long position when the %D indicator crosses the upper signal line from below.

We will compare two trading models with different signal line widths. The first model will use 20/80 for the lower and upper signal lines. The second model will use a tighter 20/50 for the lower and upper signal lines.
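
A minimal sketch of the %K/%D computation and the two signal-level configurations follows; the High/Low/Close column names are assumptions about the price DataFrame, not a guaranteed match to the script.

    import pandas as pd

    def stochastic_kd(df: pd.DataFrame, period: int = 14, smooth: int = 3):
        # Raw stochastic over the 14-bar look-back window
        lowest_low = df["Low"].rolling(period).min()
        highest_high = df["High"].rolling(period).max()
        raw_k = 100 * (df["Close"] - lowest_low) / (highest_high - lowest_low)
        k = raw_k.rolling(smooth).mean()  # %K: three-period SMA of the raw stochastic
        d = k.rolling(smooth).mean()      # %D: three-period SMA of %K
        return k, d

    # The two signal-line widths compared in this iteration. A long entry
    # fires when %D crosses the lower line from above; an exit fires when
    # %D crosses the upper line from below.
    CONFIGS = [(20, 80), (20, 50)]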

ANALYSIS: In this modeling iteration, we analyzed ten stocks between August 1, 2016, and September 3, 2021. The models' performance summary appears at the end of the script. The models with the wider signal line width generally produced a better return for the tested stocks. However, the simple buy-and-hold approach came out ahead for all stocks.

CONCLUSION: For most stocks during the modeling time frame, the long-only trading strategy with the Stochastic Oscillator signals did not produce a better return than the buy-and-hold approach. We should consider modeling these stocks further by experimenting with more variations of the strategy.

Dataset ML Model: Time series analysis with numerical attributes

Dataset Used: Quandl

The HTML-formatted report can be found here on GitHub.