Jeff Goins on Real Artists Don’t Starve, Part 11

In his book, Real Artists Don’t Starve: Timeless Strategies for Thriving in the New Creative Age, Jeff Goins discusses prudent strategies we can apply to position ourselves to thrive in our chosen field of craft.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 11, Diversify Your Portfolio

In this chapter, Jeff discusses how thriving artists should handle their projects and portfolios. He offers the following recommendations for us to think about:

  • The Rule of the Portfolio says that we should strive to build a diverse body of work. While the starving artist believes he must master a single skill, the thriving artist masters more than one. The best artists regularly change and evolve; they do not restrict their art to a single form.
  • Thriving artists work like good investors. They do not just live off their art. They keep diverse portfolios and rely on multiple income streams. Building a diverse portfolio requires developing a leaky mental filter for spotting the right places to invest our time and resources.
  • A leaky mental filter is the ability to hold multiple conflicting ideas in tension to create synergy with each other. A skillful exercise of the leaky filter can give us insight into possibility as it allows us to identify new opportunities and take advantage of them.
  • If we want to create enduring work and not just a series of one-hit wonders, we must be open to learning new things. So while starving artists try to master only one skill, thriving artists acquire whatever skills are necessary to get the job done.
  • There comes a time not to let our mind wander but to dig in and focus. We focus on developing a body of work rather than a single creation. Harnessing a distractible mind can be a strength in creative work: we can use our creative quirks to our advantage by identifying opportunities for fulfilling work that we might otherwise have missed.
  • We must practice using our leaky filters to find new skills, learn them, and apply them. Then, while focusing on the big picture, we will use any skills and tools that will help us develop a more substantial portfolio, which can lead to a lifetime of creation.

In summary, “The Starving Artist masters one craft. The Thriving Artist masters many.”

All the Answers

(From a writer I respect, Seth Godin)

In an industrialized economy run by experts, there is enormous pressure to be the person who is certain, the one with all the answers.

More valuable is the person who keeps asking questions; being able to identify what has not yet been figured out, and to see what has not yet been discovered, is a more important advantage.

Rarest of all is the humble (and confident) person who realizes that even compiling a list of the questions may be harder still. Finding the right questions is what we really need to do.

Binary Classification Model for Kaggle Rice Seed Dataset Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Rice Seed dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The dataset owner collected data on two different varieties of rice (Gonen and Jasmine). The goal is to train a model that can correctly predict the rice variety.

ANALYSIS: The performance of the preliminary XGBoost model achieved an accuracy benchmark of 0.9903. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final score of 0.9903. When we applied the final model to the test dataset, the model achieved an accuracy score of 0.9879.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Rice Seed Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://www.kaggle.com/seymasa/rice-dataset-gonenjasmine

One potential source of performance benchmark: https://www.kaggle.com/seymasa/rice-dataset-gonenjasmine

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Kaggle Rice Seed Dataset Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Rice Seed dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The dataset owner collected data on two different varieties of rice (Gonen and Jasmine). The goal is to train a model that can correctly predict the rice variety.

ANALYSIS: The average performance of the machine learning algorithms achieved an accuracy benchmark of 0.9881 using the training dataset. We selected k-Nearest Neighbors and Random Forest to perform the tuning exercises. After a series of tuning trials, the refined Random Forest model processed the training dataset with a final accuracy score of 0.9900. When we processed the test dataset using the final model, the model achieved an accuracy score of 0.9876.
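
The spot-check-then-tune workflow described above can be outlined as below. This is a hypothetical sketch, not the actual template: synthetic data stands in for the rice dataset, and the candidate models and parameter grid are illustrative.

```python
# Sketch: benchmark several scikit-learn algorithms by cross-validated
# accuracy, then tune the strongest candidate with a small grid search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# Spot-check each candidate with 5-fold cross-validation.
candidates = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=42),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.4f}")

# Refine the strongest candidate (Random Forest here) over a small grid.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=5, scoring="accuracy")
grid.fit(X, y)
print("Best params:", grid.best_params_, "score:", round(grid.best_score_, 4))
```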

CONCLUSION: In this iteration, the Random Forest model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Rice Seed Dataset

Dataset ML Model: Binary classification with numerical attributes

Dataset Reference: https://www.kaggle.com/seymasa/rice-dataset-gonenjasmine

One potential source of performance benchmark: https://www.kaggle.com/seymasa/rice-dataset-gonenjasmine

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Kaggle Tabular Playground Series 2021 September Using Python and TensorFlow

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground September 2021 dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, it has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset deals with predicting whether a customer will file a claim on an insurance policy. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the cross-validated TensorFlow models achieved an average accuracy benchmark of 0.6891 after running for 50 epochs. When we applied the final model to Kaggle’s test dataset, the model achieved an accuracy score of 0.6189.
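
Cross-validating a neural network and averaging the fold accuracies, as described above, can be sketched as follows. This is a minimal stand-in assuming TensorFlow/Keras is installed: the tiny network, synthetic data, and epoch count are placeholders, not the project's actual configuration.

```python
# Sketch: average the validation accuracy of a small Keras network
# across stratified cross-validation folds.
import numpy as np
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=8, random_state=42)
X = X.astype("float32")

def build_model(n_features):
    # A tiny binary classifier: one hidden layer, sigmoid output.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

fold_scores = []
splitter = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for train_idx, val_idx in splitter.split(X, y):
    model = build_model(X.shape[1])
    model.fit(X[train_idx], y[train_idx], epochs=10, verbose=0)
    _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
    fold_scores.append(acc)

print(f"Mean cross-validated accuracy: {np.mean(fold_scores):.4f}")
```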

CONCLUSION: In this iteration, the TensorFlow model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground 2021 September Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-sep-2021

One potential source of performance benchmark: https://www.kaggle.com/c/tabular-playground-series-sep-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Kaggle Tabular Playground Series 2021 September Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground September 2021 dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, it has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset deals with predicting whether a customer will file a claim on an insurance policy. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The performance of the preliminary XGBoost model achieved a ROC/AUC benchmark of 0.7256. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final score of 0.7862. When we applied the final model to Kaggle’s test dataset, the model achieved a ROC/AUC score of 0.7865.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground 2021 September Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-sep-2021

One potential source of performance benchmark: https://www.kaggle.com/c/tabular-playground-series-sep-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Kaggle Tabular Playground Series 2021 September Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground September 2021 dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, it has hosted playground-style competitions with fun but less complex tabular datasets. The dataset used for this competition is synthetic but based on a real dataset and generated using a CTGAN. The original dataset deals with predicting whether a customer will file a claim on an insurance policy. Although the features are anonymized, they have properties relating to real-world features.

ANALYSIS: The average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 0.6214 using the training dataset. We selected Random Forest as the final model as it processed the training dataset with a final ROC/AUC score of 0.7361. When we processed Kaggle’s test dataset with the final model, the model achieved a ROC/AUC score of 0.7372.
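
Since this dataset mixes numerical and categorical attributes, a preprocessing pipeline is typically needed before fitting a Random Forest scored on ROC/AUC. The sketch below is hypothetical: the column names and synthetic data are invented for illustration and do not come from the actual competition data.

```python
# Sketch: preprocess mixed numerical/categorical features with a
# ColumnTransformer, then cross-validate a Random Forest on ROC/AUC.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "num_1": rng.normal(size=n),            # placeholder numeric feature
    "num_2": rng.normal(size=n),
    "cat_1": rng.choice(["a", "b", "c"], size=n),  # placeholder category
})
# Synthetic binary target with some signal in num_1 and cat_1.
y = (df["num_1"] + (df["cat_1"] == "a")
     + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["num_1", "num_2"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["cat_1"]),
])
pipeline = Pipeline([
    ("prep", preprocess),
    ("model", RandomForestClassifier(random_state=42)),
])

auc = cross_val_score(pipeline, df, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated ROC/AUC: {auc:.4f}")
```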

CONCLUSION: In this iteration, the Random Forest model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground 2021 September Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-sep-2021

One potential source of performance benchmark: https://www.kaggle.com/c/tabular-playground-series-sep-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Jeff Goins on Real Artists Don’t Starve, Part 10

In his book, Real Artists Don’t Starve: Timeless Strategies for Thriving in the New Creative Age, Jeff Goins discusses prudent strategies we can apply to position ourselves to thrive in our chosen field of craft.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 10, Own Your Work

In this chapter, Jeff discusses the delicate balance between owning our work and selling out to others. He offers the following recommendations for us to think about:

  • As creatives, the Rule of Ownership says our job is to create great work and protect those works. For any creative, the challenge of earning a living is formidable. However, if we sell off everything we make, we can end up starving again. The more we own of our work, the more creative control we have.
  • As artists, our chief goal should be to make the work great. Sometimes we may need to make sacrifices or even walk away from great opportunities before achieving that goal. We do this not to hoard our gifts but to maintain the control we need to make our work excellent. We should be open to trading a short-term loss for a long-term gain.
  • Ownership is the insurance that can protect us from the gatekeeper system that might work against us. The Starving Artist tends to trust the system and hope it will take care of him. However, taking care of the artists often is not what the system was designed to do. Therefore, the safest place for our work is to stay with us.
  • When the time is right, it might make sense to sell out. We should always do this in the interest of the art, not as an act of desperation. Selling out in the wrong way, at the wrong time, or for the wrong reason is what we need to avoid. We should consider selling out only when we believe we are selling our work to someone who can make it even better.
  • “We must own our masters or our masters will own us.”

In summary, “The Starving Artist sells out too soon. The Thriving Artist owns his work.”