Charlie Gilkey on Start Finishing, Part 3

In his book, Start Finishing: How to Go from Idea to Done, Charlie Gilkey discusses how we can follow a nine-step method to convert an idea into a project and get the project done via a reality-based schedule.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 3, Pick an Idea That Matters to You

In this chapter, Charlie discusses which idea we should turn into a project. Of course, we all want to do our best work, but we often avoid asking ourselves the hard questions about which idea we should work on realizing. He offers the following views for us to think about:

  • When we try to choose an idea to work on, we thrash. Thrash is the emotional flailing we do when we do not fully commit to our best work. The more an idea matters to us, the more we will thrash. That is because the idea’s success or failure is critical to us.
  • Failure is inevitable when we try to do our best work. Doing our best work is showing up and dancing with uncertainty. Fortunately, failure can reveal what matters to us, show us when we are out of alignment on something, and reveal areas for growth.
  • The five hard questions to ask when picking the project that matters most:
    • If someone close to us asked what the most important thing we have done over the last year was, what would we say?
    • Which item on our idea list causes the most gut-level anguish when we consider cutting it from the list entirely?
    • Which item on our list are we most likely to create the schedule space to work on?
    • Which item on our list, if finished, will matter the most in the near or distant future?
    • Which item on the list is worth claiming one of our remaining “significant project” slots during our remaining lifespan?
  • Due to the limitations of time and energy, each decision carries an opportunity cost. We must let go of ideas that are not allowing us to thrive, so we can trade up to the projects that do.

The Right Amount of Time

(From a writer I respect, Seth Godin)

Ultimately, a society’s culture tells us how much time we should spend on something. It calls that the “right” amount: how long an education should take, or a business proposal; how quickly an order should be delivered; how long it takes to buy a new car; how much time a doctor spends seeing a patient; how much time we should spend learning a new skill or engaging with a new idea.

If you spend the same amount of time as everyone else, you will probably get similar results.

There are two other options worth considering:

One is to spend far more time than others consider reasonable, and to charge accordingly. Perhaps that will lead to extraordinary results.

The other is to spend far less time than you are supposed to, and to invest the saved time in other processes or alternatives, producing results that no one can ignore.

When someone has the courage to reorganize the time stack, the culture often shifts along with it.

Binary Classification Model for Diabetes 130-US Hospitals Using Python and XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Diabetes 130-US Hospitals dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The dataset is Diabetes 130-US Hospitals for Years 1999-2008, donated to the University of California, Irvine (UCI) Machine Learning Repository. The dataset represents ten years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes.

ANALYSIS: The preliminary XGBoost model achieved an accuracy benchmark of 63.72%. After a series of tuning trials, the refined XGBoost model processed the training dataset with a final accuracy score of 64.94%. When we processed the test dataset with the final model, the model achieved an accuracy score of 65.39%.
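For readers who want a feel for this kind of workflow, here is a minimal sketch of an XGBoost binary-classification pipeline with a small tuning grid. The file name, target encoding, and parameter grid are illustrative assumptions, not the exact settings used in the report.

    import pandas as pd
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import GridSearchCV, train_test_split
    from xgboost import XGBClassifier

    # Hypothetical local copy of the UCI dataset; the target encoding is an assumption.
    df = pd.read_csv("diabetic_data.csv")
    X = pd.get_dummies(df.drop(columns=["readmitted"]))
    y = (df["readmitted"] != "NO").astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # Small, illustrative tuning grid around a baseline XGBoost classifier.
    param_grid = {"max_depth": [3, 5, 7],
                  "n_estimators": [100, 300],
                  "learning_rate": [0.05, 0.1]}
    search = GridSearchCV(XGBClassifier(eval_metric="logloss", random_state=42),
                          param_grid, scoring="accuracy", cv=5, n_jobs=-1)
    search.fit(X_train, y_train)

    print("Cross-validated accuracy:", search.best_score_)
    print("Test accuracy:", accuracy_score(y_test, search.predict(X_test)))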

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Diabetes 130-US Hospitals for years 1999-2008 Dataset

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/296

One potential source of performance benchmarks: http://www.hindawi.com/journals/bmri/2014/781670/

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Diabetes 130-US Hospitals Using Python and Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Diabetes 130-US Hospitals dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The dataset is Diabetes 130-US Hospitals for Years 1999-2008, donated to the University of California, Irvine (UCI) Machine Learning Repository. The dataset represents ten years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes.

ANALYSIS: On average, the machine learning algorithms achieved an accuracy benchmark of 61.20% using the training dataset. We selected the Logistic Regression and Random Forest models to perform the tuning exercises. After a series of tuning trials, the refined Random Forest model processed the training dataset with a final accuracy score of 64.38%. When we processed the test dataset with the final model, the model achieved an accuracy score of 64.61%.
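Here is a minimal sketch of the spot-check-then-tune pattern with scikit-learn. The candidate list, parameter grid, and target encoding are illustrative assumptions rather than the report's exact configuration.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical local copy of the UCI dataset; the target encoding is an assumption.
    df = pd.read_csv("diabetic_data.csv")
    X = pd.get_dummies(df.drop(columns=["readmitted"]))
    y = (df["readmitted"] != "NO").astype(int)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # Spot-check a few candidate algorithms with 5-fold cross-validation.
    candidates = {"LR": LogisticRegression(max_iter=1000),
                  "CART": DecisionTreeClassifier(random_state=42),
                  "RF": RandomForestClassifier(random_state=42)}
    for name, model in candidates.items():
        scores = cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy")
        print(name, "mean accuracy:", scores.mean())

    # Tune the most promising candidate (Random Forest here) with a small grid.
    rf_grid = {"n_estimators": [200, 500], "max_depth": [None, 10, 20]}
    rf_search = GridSearchCV(RandomForestClassifier(random_state=42),
                             rf_grid, scoring="accuracy", cv=5, n_jobs=-1)
    rf_search.fit(X_train, y_train)
    print("Tuned RF accuracy on the test set:", rf_search.score(X_test, y_test))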

CONCLUSION: In this iteration, the Random Forest model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Diabetes 130-US Hospitals for years 1999-2008 Dataset

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/296

One potential source of performance benchmarks: http://www.hindawi.com/journals/bmri/2014/781670/

The HTML formatted report can be found here on GitHub.

Data Validation for Diabetes 130-US Hospitals Using Python and TensorFlow Data Validation

SUMMARY: This project aims to construct a data validation flow using TensorFlow Data Validation (TFDV) and document the end-to-end steps using a template. The Diabetes 130-US Hospitals dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: The dataset is Diabetes 130-US Hospitals for Years 1999-2008, donated to the University of California, Irvine (UCI) Machine Learning Repository. The dataset represents ten years (1999-2008) of clinical care at 130 US hospitals and integrated delivery networks. It includes over 50 features representing patient and hospital outcomes.

Additional Notes: I adapted this workflow from the TensorFlow Data Validation tutorial on TensorFlow.org (https://www.tensorflow.org/tfx/tutorials/data_validation/tfdv_basic). I also plan to build a TFDV script for validating future datasets and building machine learning models.

CONCLUSION: In this iteration, the data validation workflow helped to validate the features and structures of the training, validation, and test datasets. The workflow also generated statistics over different slices of data, which can help track model and anomaly metrics.
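Here is a minimal sketch of that TFDV flow, following the basic tutorial: compute statistics on the training split, infer a schema, and validate the other splits against it. The CSV file names are assumptions for illustration.

    import tensorflow_data_validation as tfdv

    # Compute statistics on the training split and infer a baseline schema.
    train_stats = tfdv.generate_statistics_from_csv(data_location="diabetes_train.csv")
    schema = tfdv.infer_schema(statistics=train_stats)
    tfdv.display_schema(schema)

    # Check the validation and test splits against the training schema.
    for split in ("diabetes_validation.csv", "diabetes_test.csv"):
        split_stats = tfdv.generate_statistics_from_csv(data_location=split)
        anomalies = tfdv.validate_statistics(statistics=split_stats, schema=schema)
        tfdv.display_anomalies(anomalies)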

Dataset Used: Diabetes 130-US Hospitals for years 1999-2008 Dataset

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Kaggle Tabular Playground Series 2021 Apr Using Python and AutoKeras

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Apr 2021 dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. The dataset used for this competition is synthetic but based on the real Titanic dataset and generated using a CTGAN. The statistical properties of this dataset are very similar to the original Titanic dataset, but there is no shortcut to cheat by using public labels for predictions.

ANALYSIS: The cross-validated TensorFlow models achieved an average accuracy benchmark of 0.7702 after running for 45 trials. When we applied the final model to Kaggle’s test dataset, the model achieved an accuracy score of 0.7865.
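Here is a minimal sketch of an AutoKeras structured-data classification run with a fixed trial budget. The file and column names ("Survived", "PassengerId") are assumptions based on the Titanic-style data, not necessarily the notebook's actual setup.

    import autokeras as ak
    import pandas as pd

    # Hypothetical competition files; "Survived" and "PassengerId" are assumed
    # column names for the target and row identifier.
    train = pd.read_csv("train.csv")
    test = pd.read_csv("test.csv")
    X_train = train.drop(columns=["Survived", "PassengerId"])
    y_train = train["Survived"]

    # Search 45 candidate architectures, matching the trial count in the write-up.
    clf = ak.StructuredDataClassifier(max_trials=45, overwrite=True)
    clf.fit(X_train, y_train, validation_split=0.2, epochs=30)

    best_model = clf.export_model()          # best Keras model found by the search
    preds = clf.predict(test.drop(columns=["PassengerId"]))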

CONCLUSION: In this iteration, the AutoKeras-generated TensorFlow model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground Series 2021 Apr Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-apr-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-apr-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Binary Classification Model for Kaggle Tabular Playground Series 2021 Apr Using Python and TensorFlow

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Tabular Playground Apr 2021 dataset is a binary classification situation where we attempt to predict one of the two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, they have hosted playground-style competitions on Kaggle with fun but less complex, tabular datasets. The dataset used for this competition is synthetic but based on the real Titanic dataset and generated using a CTGAN. The statistical properties of this dataset are very similar to the original Titanic dataset, but there is no shortcut to cheat by using public labels for predictions.

ANALYSIS: The cross-validated TensorFlow models achieved an average accuracy benchmark of 0.7689 after running for 15 epochs. When we applied the final model to Kaggle’s test dataset, the model achieved an accuracy score of 0.7831.
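Here is a minimal sketch of cross-validating a small Keras network, training each fold for 15 epochs as in the write-up. The preprocessing and column names are illustrative assumptions, not the notebook's exact configuration.

    import numpy as np
    import pandas as pd
    import tensorflow as tf
    from sklearn.model_selection import StratifiedKFold

    # Hypothetical competition file; "Survived" and "PassengerId" are assumed names.
    train = pd.read_csv("train.csv")
    X = pd.get_dummies(train.drop(columns=["Survived", "PassengerId"]))
    X = X.fillna(0).to_numpy(dtype="float32")
    y = train["Survived"].to_numpy()

    def build_model(n_features):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_features,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid")])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # 5-fold cross-validation; collect the accuracy of each fold's held-out split.
    scores = []
    for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                              random_state=42).split(X, y):
        model = build_model(X.shape[1])
        model.fit(X[train_idx], y[train_idx], epochs=15, batch_size=64, verbose=0)
        scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0)[1])

    print("Mean cross-validated accuracy:", np.mean(scores))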

CONCLUSION: In this iteration, the TensorFlow model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Kaggle Tabular Playground Series 2021 Apr Data Set

Dataset ML Model: Binary classification with numerical and categorical attributes

Dataset Reference: https://www.kaggle.com/c/tabular-playground-series-apr-2021

One potential source of performance benchmarks: https://www.kaggle.com/c/tabular-playground-series-apr-2021/leaderboard

The HTML formatted report can be found here on GitHub.

Charlie Gilkey on Start Finishing, Part 2

In his book, Start Finishing: How to Go from Idea to Done, Charlie Gilkey discusses how we can follow a nine-step method to convert an idea into a project and get the project done via a reality-based schedule.

These are some of my favorite concepts and takeaways from reading the book.

Chapter 2, Getting to Your Best Work

In this chapter, Charlie discusses the five challenges that keep us from doing our best work. He also discusses the five keys that we can use to mitigate those challenges. He offers the following views for us to think about:

  • The five challenges are competing priorities, head trash, no realistic plan, too few resources, and poor team alignment.
  • The five keys to overcoming the challenges are:
    • Intention: We need to start by asking ourselves the question of “why.” We also need to have a concrete result in mind for the finish line of the project.
    • Awareness: Awareness is knowing our best work and the conditions under which we will do and produce our best work. It is all about knowing ourselves.
    • Boundaries: We need to set up boundaries around our best work to protect it from the things that keep us from doing it. Without those boundaries, it will be easy for something else to come into our environment and displace our best work.
    • Courage: When we are doing our best work, we will face a continual stream of obstacles and chances to back down and hide. When fear surfaces, courage and the faith it inspires are our way out of hiding.
    • Discipline: Discipline helps channel our energy into purposeful, constructive actions toward our best work. Habits are discipline made automatic. Developing habits that are conducive to doing our best work is why we practice discipline.
  • Some keys are more effective at overcoming a particular challenge than others.
  • The five keys are skills that we can cultivate and practice every day.