Annie Duke on How to Decide, Part 9

In her book, How to Decide: Simple Tools for Making Better Choices, Annie Duke discusses how to train our brains to combat our own bias and help ourselves make more confident and better decisions.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 8, “The Power of Negative Thinking”

In this chapter, Annie Duke discusses how we can improve our decision-making skills by applying techniques such as premortems, backcasting, precommitment contracts, and category decisions. She offers the following recommendations:

  • Think Positive, but Plan Negative:
    • When it comes to reaching our goals, most of us have an execution problem. For the most part, we know what we should do to reach a goal, but our decisions often lead us to a different outcome.
    • The gap between what we know we should do to achieve our goals and the decisions we actually make is called a behavior gap. Available decision tools can help us narrow that gap, and Negative Thinking is one of the most effective.
    • One tool of Negative Thinking is mental contrasting. When we conduct mental contrasting, we try to imagine what we want to accomplish and confront the obstacles that might stand in the way of reaching the goal.
  • Premortems and Backcasting:
    • A premortem imagines ourselves, at some point in the future, having failed to achieve a goal, and looks back at how we arrived at that outcome. There are four general steps to performing a premortem.
    • Step 1: Identify the goal or a specific decision we are considering.
    • Step 2: Figure out a reasonable time frame for achieving the goal or the decision.
    • Step 3: Imagine we are on the day after the decision period, and we did not achieve the goal as expected. List up to five reasons why we failed due to our actions or decisions.
    • Step 4: List up to five reasons why we failed due to things outside of our control.
    • Backcasting is the flip side of a premortem: we imagine, in advance, what our journey would look like had we succeeded. We again list up to five reasons within and outside of our control.
    • We combine the four sets of reasons from the premortem and backcasting to form a Decision Exploration Table. We use the Decision Exploration Table to gain as much visibility as we can and take the following steps:
    • Step 1: Modify our decision to increase the odds of the good things happening and decrease the odds of the bad.
    • Step 2: Plan how we will react to the potential future outcomes to minimize surprises.
    • Step 3: Look for ways to mitigate the impact of bad outcomes should they occur.

Beliefs and Knowledge

(From a writer I respect, Seth Godin)

They are not the same.

Knowledge keeps changing. As we interact with the world and encounter new data or new experiences, our knowledge changes.

Belief, however, is what we call the things that persist, especially in the face of changing knowledge.

Although more knowledge can change a belief, it usually does not. Belief is a cultural phenomenon, created together with the people around us.

A simple way to tell the two apart is to ask: “What would you need to see or learn to change your mind?”

Algorithmic Trading Model for Trend-Following with Moving Averages Crossover Strategy Using Python Take 2

NOTE: This script is for learning purposes only and does not constitute a recommendation for buying or selling any stock mentioned in this script.

SUMMARY: This project aims to construct and test an algorithmic trading model and document the end-to-end steps using a template.

INTRODUCTION: This algorithmic trading model examines a simple trend-following strategy for a stock. The model enters a position when the price reaches either the highest or the lowest point of the last X trading days. The model exits the trade when the stock’s fast and slow moving-average lines cross each other.

In addition to the stock price, the models will also use the trading volume indicator to confirm the buy/sell signal further. Finally, the strategy will also incorporate a fixed holding window. The system will exit the position when the holding window reaches the maximum window size.

From iteration Take1, we set up the models using a trend window size for long trades only. The window size varied from 10 to 50 trading days at a 5-day increment. We used 20 to 40 days for the fast-moving average and 50 to 80 days for the slow-moving average. The models also incorporated a volume indicator with a fixed window size of 10 days to confirm the buy/sell signal. Furthermore, we did not limit the holding period by setting the maximum holding period to 999 days for this iteration.

In this Take2 iteration, we will set up the models using a trend window size for short trades only. The window size will vary from 10 to 50 trading days at a 5-day increment. We will use 20 to 40 days for the fast-moving average and 50 to 80 days for the slow-moving average. The models will also incorporate a volume indicator with a fixed window size of 10 days to confirm the buy/sell signal. Furthermore, we will not limit the holding period by setting the maximum holding period to 999 days for this iteration.
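The exit rule described above — leave the trade when the fast and slow moving averages cross, or when the maximum holding period is reached — can be sketched in plain Python. This is an illustrative sketch, not the project’s actual code; the function names and default window sizes are assumptions:

```python
def sma(prices, window, i):
    """Simple moving average of the `window` prices ending at index i."""
    return sum(prices[i - window + 1 : i + 1]) / window

def exit_signal(prices, entry_index, i, fast=20, slow=50, max_hold=999):
    """Exit when the fast and slow moving averages cross, or when the
    position has been held for `max_hold` trading days."""
    if i - entry_index >= max_hold:
        return True
    if i < slow:  # not enough history for the slow average yet
        return False
    fast_prev = sma(prices, fast, i - 1)
    slow_prev = sma(prices, slow, i - 1)
    fast_now = sma(prices, fast, i)
    slow_now = sma(prices, slow, i)
    # A crossover occurs when the sign of (fast - slow) flips between days.
    return (fast_prev - slow_prev) * (fast_now - slow_now) < 0
```

Setting `max_hold` to 999 days, as in this iteration, effectively disables the holding-window exit and leaves the crossover as the only exit trigger.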

ANALYSIS: From iteration Take1, we analyzed the stock prices for Apple Inc. (AAPL) between January 1, 2018, and February 19, 2021. The top trading model produced a profit of 92.77 dollars per share. The buy-and-hold approach yielded a gain of 87.70 dollars per share.

In this Take2 iteration, we analyzed the stock prices for Apple Inc. (AAPL) between January 1, 2018, and February 19, 2021. The top trading model produced a loss of 3.57 dollars per share. The buy-and-hold approach yielded a gain of 87.70 dollars per share.

CONCLUSION: For the stock of AAPL during the modeling time frame, the short-only trading strategy produced a much worse return than the buy-and-hold approach. However, we should consider modeling this stock further by experimenting with more variations of the strategy.

Dataset ML Model: Time series analysis with numerical attributes

Dataset Used: Quandl

The HTML formatted report can be found here on GitHub.

Multi-Class Image Classification Deep Learning Model for Flower Photos Using TensorFlow Take 4

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The Flower Photos dataset is a multi-class classification situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The Flower Photos dataset is a collection of 3,670 flower photos in five different species. This dataset is part of the TensorFlow standard dataset collection.

From iteration Take1, we constructed and tuned a machine learning model using a simple three-layer MLP network. We also observed the best result that we could obtain using the validation dataset.

From iteration Take2, we constructed and tuned a machine learning model using the VGG-16 architecture. We also observed the best result that we could obtain using the validation dataset.

From iteration Take3, we constructed and tuned a machine learning model using the Inception V3 architecture. We also observed the best result that we could obtain using the validation dataset.

In this Take4 iteration, we will construct and tune a machine learning model using the ResNet50 V2 architecture. We will also observe the best result that we can obtain using the validation dataset.
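The general shape of this kind of transfer-learning setup in TensorFlow can be sketched as follows. This is a minimal illustrative sketch, assuming a 224×224 input size and a simple classification head; the actual project’s layer choices and hyperparameters may differ:

```python
import tensorflow as tf

NUM_CLASSES = 5  # the five flower species in the dataset

# Pre-trained ResNet50 V2 backbone, without its original ImageNet head.
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With the backbone frozen, only the new head’s weights are trained, which is what lets a dataset of only 3,670 photos benefit from ImageNet pre-training.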

ANALYSIS: From iteration Take1, the baseline model’s performance achieved an accuracy score of 80.24% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 74.69%.

From iteration Take2, the VGG-16 model’s performance achieved an accuracy score of 73.71% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 65.53%.

From iteration Take3, the Inception V3 model’s performance achieved an accuracy score of 75.89% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 72.50%.

In this Take4 iteration, the ResNet50 V2 model’s performance achieved an accuracy score of 77.94% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 72.91%.

CONCLUSION: In this iteration, the TensorFlow CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: Flower Photos Dataset

Dataset ML Model: Multi-class image classification with numerical attributes

Dataset Reference: https://www.tensorflow.org/datasets/catalog/tf_flowers

One potential source of performance benchmarks: https://www.tensorflow.org/tutorials/images/classification

The HTML formatted report can be found here on GitHub.

Algorithmic Trading Model for Trend-Following with Moving Averages Crossover Strategy Using Python Take 1

NOTE: This script is for learning purposes only and does not constitute a recommendation for buying or selling any stock mentioned in this script.

SUMMARY: This project aims to construct and test an algorithmic trading model and document the end-to-end steps using a template.

INTRODUCTION: This algorithmic trading model examines a simple trend-following strategy for a stock. The model enters a position when the price reaches either the highest or the lowest point of the last X trading days. The model exits the trade when the stock’s fast and slow moving-average lines cross each other.

In addition to the stock price, the models will also use the trading volume indicator to confirm the buy/sell signal further. Finally, the strategy will also incorporate a fixed holding window. The system will exit the position when the holding window reaches the maximum window size.

In this Take1 iteration, we will set up the models using a trend window size for long trades only. The window size will vary from 10 to 50 trading days at a 5-day increment. We will use 20 to 40 days for the fast-moving average and 50 to 80 days for the slow-moving average. The models will also incorporate a volume indicator with a fixed window size of 10 days to confirm the buy/sell signal. Furthermore, we will not limit the holding period by setting the maximum holding period to 999 days for this iteration.
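The entry rule above — enter a long trade when the price makes a new X-day high, confirmed by above-average volume — can be sketched in plain Python. The function names and window defaults here are illustrative assumptions, not the project’s actual code:

```python
def long_entry_signal(prices, i, trend_window=20):
    """Enter a long position when today's price is the highest
    of the last `trend_window` trading days."""
    if i + 1 < trend_window:
        return False  # not enough history yet
    window = prices[i - trend_window + 1 : i + 1]
    return prices[i] >= max(window)

def volume_confirms(volumes, i, vol_window=10):
    """Confirm the signal: today's volume exceeds its recent average."""
    if i + 1 < vol_window:
        return False
    avg = sum(volumes[i - vol_window + 1 : i + 1]) / vol_window
    return volumes[i] > avg
```

A trade would be opened only when both functions return True on the same day, which is the double-confirmation idea described above.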

ANALYSIS: In this Take1 iteration, we analyzed the stock prices for Apple Inc. (AAPL) between January 1, 2018, and February 19, 2021. The top trading model produced a profit of 92.77 dollars per share. The buy-and-hold approach yielded a gain of 87.70 dollars per share.

CONCLUSION: For the stock of AAPL during the modeling time frame, the long-only trading strategy produced a better return than the buy-and-hold approach. However, we should consider modeling this stock further by experimenting with more variations of the strategy.

Dataset ML Model: Time series analysis with numerical attributes

Dataset Used: Quandl

The HTML formatted report can be found here on GitHub.

Multi-Class Image Classification Deep Learning Model for Flower Photos Using TensorFlow Take 3

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The Flower Photos dataset is a multi-class classification situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The Flower Photos dataset is a collection of 3,670 flower photos in five different species. This dataset is part of the TensorFlow standard dataset collection.

From iteration Take1, we constructed and tuned a machine learning model using a simple three-layer MLP network. We also observed the best result that we could obtain using the validation dataset.

From iteration Take2, we constructed and tuned a machine learning model using the VGG-16 architecture. We also observed the best result that we could obtain using the validation dataset.

In this Take3 iteration, we will construct and tune a machine learning model using the Inception V3 architecture. We will also observe the best result that we can obtain using the validation dataset.

ANALYSIS: From iteration Take1, the baseline model’s performance achieved an accuracy score of 80.24% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 74.69%.

From iteration Take2, the VGG-16 model’s performance achieved an accuracy score of 73.71% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 65.53%.

In this Take3 iteration, the Inception V3 model’s performance achieved an accuracy score of 75.89% after 25 epochs using the training dataset. The model also processed the validation dataset with an accuracy score of 72.50%.

CONCLUSION: In this iteration, the TensorFlow CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: Flower Photos Dataset

Dataset ML Model: Multi-class image classification with numerical attributes

Dataset Reference: https://www.tensorflow.org/datasets/catalog/tf_flowers

One potential source of performance benchmarks: https://www.tensorflow.org/tutorials/images/classification

The HTML formatted report can be found here on GitHub.

Multi-Class Image Classification Deep Learning Model for Flower Photos Using TensorFlow Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The Flower Photos dataset is a multi-class classification situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The Flower Photos dataset is a collection of 3,670 flower photos in five different species. This dataset is part of the TensorFlow standard dataset collection.

From iteration Take1, we constructed and tuned a machine learning model using a simple three-layer MLP network. We also observed the best result that we could obtain using the validation dataset.

In this Take2 iteration, we will construct and tune a machine learning model using the VGG-16 architecture. We will also observe the best result that we can obtain using the validation dataset.

ANALYSIS: From iteration Take1, the baseline model’s performance achieved an accuracy score of 80.24% after 25 epochs using the training dataset. After tuning, the model processed the validation dataset with an accuracy score of 74.69%.

In this Take2 iteration, the VGG-16 model’s performance achieved an accuracy score of 73.71% after 25 epochs using the training dataset. After tuning, the model processed the validation dataset with an accuracy score of 65.53%.

CONCLUSION: In this iteration, the TensorFlow CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: Flower Photos Dataset

Dataset ML Model: Multi-class image classification with numerical attributes

Dataset Reference: https://www.tensorflow.org/datasets/catalog/tf_flowers

One potential source of performance benchmarks: https://www.tensorflow.org/tutorials/images/classification

The HTML formatted report can be found here on GitHub.

Annie Duke on How to Decide, Part 8

In her book, How to Decide: Simple Tools for Making Better Choices, Annie Duke discusses how to train our brains to combat our own bias and help ourselves make more confident and better decisions.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 7, “Breaking Free from Analysis Paralysis”

In this chapter, Annie Duke discusses how we can spend our decision-making time more wisely and reach working decisions faster. She offers the following recommendations:

  • A Sheep in Wolf’s Clothing:
    • We often get trapped and slow down in a decision-making process based on the closeness of the options. When two options are close to each other in payoff or quality, we become much slower in choosing.
    • Annie asserts that when two options are close in payoff or quality, the decision is actually easy. She also suggests we ask this assessment question: “Whichever option I choose, how wrong can I be?”
    • When two options are that close, we can break through the bottleneck and decide quickly because, whichever one we choose, we cannot be far off.
  • Quitters Often Win, and Winners Often Quit:
    • Opportunity cost is another tool we can use to enhance our decision-making skills. When we pick an option, we lose the potential gains associated with the options we do not pick.
    • Part of a good decision process includes asking ourselves, “If I pick this option, what’s the cost of quitting?” The lower the cost of quitting, the faster we can go. It is easier to unwind the decision and choose a different option, including options we may have rejected before.
    • Once we understand the importance of quitting a decision and making course adjustments, we can use the tool of decision stacking. Decision stacking is the habit of finding ways to make low-impact, easy-to-quit decisions in advance of a high-impact, harder-to-quit decision.
  • Is This Your Final Answer?
    • For every decision, there comes the point when we should stop analyzing and just decide. If our goal is to get to certainty about our choice on every decision, we will never be finished with the analysis.
    • We can ask ourselves this question, “Is there additional information that would establish a clearly preferred option or cause us to change our preferred option?” If yes, find that information first. If no, decide and move on.