Goodbye Pelican, Hello WordPress!

First of all, sorry to those of you who came here through Google and were redirected to the homepage. I tried my best to preserve URLs, but I couldn’t figure out a great way to do that.

For returning readers, you may have noticed the site has changed. That’s because this blog is now powered by WordPress!

I’m generally not a fan of heavy-handed systems, but the user experience eventually convinced me this was the right route. I’m now on WordPress, and on the paid edition at that.

Why I chose WordPress

WordPress as a platform provides a lot of tools to simplify the blog authoring experience. With Pelican, my blog writing experience was the following:

  1. Create a new file in reStructuredText and add some boilerplate.
  2. Add images by copying each image to the images/ directory, then adding the image link by hand into the file.
  3. Re-render the post over and over again.
  4. Call the execute script, which handles publishing the files to GitHub.

The disadvantages of the platform were:

  1. Iteration was slow, especially quickly adding and manipulating images.
  2. The experience was desktop-only, and Git-based to boot, so I had to have enough time to clone (or pull and push) a Git repository and fire up a text editor. Not great for jotting down a quick note.

WordPress streamlines this whole process, and supports both mobile and desktop:

  1. Create a new post in the UI.
  2. Add images by just selecting the file. I can do basic modifications like crop and rotate directly in WordPress.
  3. Click “publish”.

Overall, the reduced friction has let me write posts more frequently, as well as use it as a bed for notes in the meantime.

There are also other benefits:

  • several available themes, so I can quickly restyle
  • a mobile app
  • SEO friendliness

And there are probably more features I have yet to discover.

So, welcome to my new WordPress blog!

Book Report: The Blue Zones

About the Book

The Blue Zones discusses areas across the world where people live unusually long, fulfilling lives. In these regions, people reach 100 years old at three times the rate of those in the USA. In the book, author Dan Buettner states that studies on twins show that roughly 75% of the factors that play into longevity are environmental, with 25% genetic. As such, a change in habit or behavior can have a significant impact on one’s quality of life and how long they live.

Dan then goes into each blue zone and discusses the factors that likely play into its residents’ longevity.

Keys to Longevity

Reading through the four regions (Sardinia, Okinawa, the Adventists in Loma Linda, and Nicoya), there are several common themes. I’ve grouped them as such below.

Mostly Vegetarian Diet

A diet heavy on beans, vegetables (especially green, leafy ones), and nuts is a common element across all the regions. Each group consumes its set of vegetables regularly, and many include superfoods that are known to help prevent inflammation and cancer.

In the Adventist Health Study, a survey of thousands of Seventh-day Adventists in Loma Linda, California, vegetarians had a two-year longevity advantage; in the same study, nut eaters had a two-year advantage as well.

Regular, Moderate Alcohol

All of the regions drank alcohol regularly. Sardinians in particular drank Cannonau wine, a local variant that has three times the flavonoids of regular wine. Flavonoids help reduce inflammation, and risk of cancer, stroke, and heart disease.

Regular Tea and Coffee Consumption

Throughout the day, everyone drinks a significant amount of tea and coffee.

Reduced Added Sugar Consumption

The added sugar consumption of most surveyed is under 25 grams, half of the daily added sugar recommendation of 50 grams. Many only take added sugar with their coffee.

Sufficient Sun

The book calls out the significant amount of sun that the older generation of Okinawans gets, spending several hours out in the sun harvesting and growing vegetables.

Sufficient Water

Among the Adventists in Loma Linda, men who drank 5-6 glasses of water per day had a substantially lower risk of a fatal heart attack, while drinking more soda, coffee, and cocoa corresponded with a much higher fatal heart attack rate.

Physical Activity

In the Adventist Health Study, physical activity (30 minutes, 3 times a week) resulted in a two-year longevity increase and a decreased incidence of heart and stomach cancer. Modest activity is enough; the benefit levels out at the marathoner level.

Eat a Light Dinner

There are no studies that support this, but a light dinner is common across all the regions. For those living in these regions, most calories are consumed by noon.

Have a Strong Social Network

All of the citizens in these regions had strong social networks, visiting each other regularly. A feeling of being needed and having purpose was common among all of those interviewed.

Thoughts on the Book

The Blue Zones was a pleasant and easy read, mixing in individual interviews and Dan’s perspective with discussions around specific behaviors that may contribute to longevity, with studies and summaries near the end.

From a statistical standpoint, correlation does not always equal causation, and it’s important to keep that perspective when reading the book. The book highlights multiple behaviors that are linked to increased longevity, but it is hard to identify the individual factors that truly contribute to long-term health.

However, sans some sort of manual on how the human body works, having a group of researchers do the diligence of identifying the areas where people live the longest, healthiest lives, and extracting common elements, is extremely valuable. Many parts of this book will most likely be disproved, but there is a lot of empirical evidence that these behaviors and practices lead to a prolonged life with a higher quality of life.

Crafting pelican-export in 6 hours

Over the past two or three days, I spent some deep-work time writing pelican-export, a tool to export posts from the Pelican static blog generator to WordPress (with some easy hooks to add more targets). Overall I was happy with the project, not only because it was successful, but because I was able to get to something complete in a pretty short period of time: 6 hours. Reflecting, I owe this to the techniques I’ve learned for prototyping quickly.

Here’s a timeline of how I iterated, with some analysis.

[20m] Finding Prior Art

Before I start any project, I try to at least do a few quick web searches to see if what I want already exists. Searching for “pelican to wordpress” pulled up this blog post:

Which pointed at a git repo:

Fantastic! Something exists that I can use. Even if it doesn’t work off the bat, I can probably fix it, use it, and be on my way.

[60m] Trying to use pelican-to-wordpress

I started by cloning the repo and looking through the code. From here I got some great ideas on how to quickly build this integration (e.g. discovering the python-wordpress-xmlrpc library). Unfortunately the code only supported Markdown (my posts are in reStructuredText), and there were a few things I wasn’t a fan of (constants, including the password, in a file), so I decided to start doing some light refactoring.

I started organizing things into a package structure, and tried to use the Pelican Python package itself to do things like read the file contents (saves me the need to parse the text myself). While looking for those docs, I stumbled upon some issues in the pelican repository, suggesting that for exporting, one would want to write a plugin:

At this point, I decided to explore plugins.

[60m] Scaffolding and plugin structure

Looking through the plugin docs, it seemed much easier than trying to read in the pelican posts myself; I had limited success with instantiating a pelican reader object directly, as it expects specific configuration variables.

So I started authoring a real package. Copying in the package scaffolding from another repo, I added the minimum integration I needed to actually install the plugin into pelican and run it.
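The minimum integration is essentially a `register()` function that connects a callback to one of pelican’s signals. Here’s a rough, self-contained sketch of that shape (the signal names are from memory, and a tiny stand-in `Signal` class replaces `pelican.signals` so the example runs on its own):

```python
# In a real plugin this would be `from pelican import signals`;
# the stand-in class below mimics just the connect/send behavior.

class Signal:
    """Stand-in for pelican's blinker-based signals."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender):
        for receiver in self._receivers:
            receiver(sender)


finalized = Signal()  # pelican fires `finalized` after a build completes

exported = []


def export_posts(pelican_instance):
    # In the real plugin, this is where each article gets converted
    # into a WordPressPost and pushed to the blog.
    exported.append(pelican_instance)


def register():
    # pelican calls register() on each configured plugin at startup
    finalized.connect(export_posts)


register()
finalized.send("pelican")  # simulate the end of a pelican build
```

With that wiring in place, the exporter only has to care about the callback body; pelican hands it everything else.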

[60m] Rapid iteration with pdb

At that point, I added a pdb statement into the integration, so I could quickly look at the data structures. Using that I crafted the code to migrate post formats in a few minutes:

    # requires: from datetime import datetime
    # requires: from typing import Optional
    # requires: from wordpress_xmlrpc import WordPressPost
    def process_post(self, content) -> Optional[WordPressPost]:
        """Create a wordpress post based on pelican content"""
        if content.status == "draft":
            return None
        post = WordPressPost()
        post.title = content.title
        post.slug = content.slug
        post.content = content.content
        # this conversion is required, as pelican uses a SafeDateTime
        # that python-wordpress-xmlrpc doesn't recognize as a valid date.
        post.date = datetime.fromisoformat(content.date.isoformat())
        post.term_names = {
            "category": [content.category.name],
        }
        if hasattr(content, "tags"):
            post.term_names["post_tag"] = [tag.name for tag in content.tags]
        return post
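That date conversion deserves a note: pelican stores dates as a datetime subclass, and round-tripping through `isoformat()` normalizes it to a plain `datetime` that python-wordpress-xmlrpc accepts. A minimal illustration (with a hypothetical stand-in subclass in place of pelican’s):

```python
from datetime import datetime


class SafeDatetime(datetime):
    """Stand-in for pelican's datetime subclass."""


safe = SafeDatetime(2019, 7, 14, 9, 30)

# Round-tripping through ISO 8601 text yields a plain datetime,
# dropping the subclass along the way.
plain = datetime.fromisoformat(safe.isoformat())

assert type(plain) is datetime
```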

I added a similar pdb statement to the “finalized” pelican signal, and tested the client with hard-coded values. I was done as far as functionality was concerned!

[180m] Code cleanup and publishing

The bulk of my time after that was just smaller cleanup that I wanted to do from a code hygiene standpoint. Things like:

  • [70m] making the wordpress integration an interface, so it’s easy to hook in other exporters.
  • [40m] adding a configuration pattern to enable hooking in other exporters.
  • [10m] renaming the repo to its final name of pelican-export.
  • [30m] adding a readme and documentation.
  • [30m] publishing the package to PyPI.

This was half of my time! Interesting how much time is spent just ensuring the right structure and practices for the long term.


I took every shortcut in my book to arrive at something functional, as quickly as I could. Techniques that saved me tons of time were:

  • Looking for prior art. Brainstorming how to do the work myself would have meant investigating potential avenues and evaluating how long each would take. Having an existing example, even if it didn’t work for me, helped me ramp up on the problem quickly.
  • Throwing code away. I had a significant amount of modified code in my forked exporter. But continuing down that route would have involved a significant investment in hacking on and understanding the pelican library. Seeing that the plugin route existed, and testing it out, saved me several hours of trying to hack an interface to private pelican APIs.
  • Using pdb to live-write code. In Python especially, there’s no replacement for just introspecting and trying things. Authoring just enough code to integrate as a plugin gave me a fast feedback loop, and throwing in a pdb statement to quickly learn the data structures helped me find the ideal structure in about 10 minutes.

There was also a fair bit of Python expertise that I used to drive down the coding time, but what’s interesting is the biggest contributors to time savings were process: knowing the tricks on taking the right code approach, and iterating quickly, helped me get this done in effectively a single work day.

Book Report: Crucial Conversations

Crucial Conversations by Al Switzler, Joseph Grenny, and Ron McMillan is a book about how to ensure that difficult conversations produce a productive, positive result for everyone. The book lays out clear strategies for dealing with some of the more difficult challenges of having conversations where tensions are high.

Thoughts on the Book

Overall, I think this is one of the best books I’ve read in 2019, if not the best. The techniques the book calls out are concrete, easy to understand, and easy to see where it applies (although actually applying it is much harder). I now use the crucial conversations framework to rethink all of my conversations, even when they aren’t crucial.

There’s a general theme that comes from this book, which is that it is always possible to achieve a positive result from a conversation. There are some great examples in the book, and even just considering a third path to make all parties happy is a game changer in and of itself.

Content Notes

Defining a Crucial Conversation

Crucial Conversations begins by defining what a crucial conversation is. A crucial conversation is defined as:

  • Stakes are high
  • Opinions vary
  • Emotions run high

The interesting thing about these conversations is that a conversation can turn crucial at any moment. As such, it’s important to ingrain techniques to lead a successful crucial conversation, as oftentimes it will occur without any prior warning, relying on instincts more than processed thoughts.

Avoid the Fool’s Choice

A common mistake in a crucial conversation is to believe that one must choose between providing truthful feedback and keeping the other person’s feelings from being hurt. Those who successfully navigate crucial conversations are able to avoid hurting feelings and relationships while expressing their feedback clearly. The rest of the book discusses the strategies that make that happen.

Focus on What you Really Want

When confronted with a situation or demeanor that makes you feel personally attacked, it’s easy to react in kind, such as being defensive or attacking the other person. Unfortunately, that behavior will most likely not achieve the outcome you’re looking for. Make sure to focus on your actions and ask yourself if it will get you closer to the outcome you want to achieve.

Watch for Signs the Conversation is Starting to Go Poorly

It’s a lot easier to reconcile a conversation if it’s only beginning to go poorly, in contrast to a conversation that has already been going poorly for a while. Look for warning signs around people (and yourself) getting defensive or emotional, so you can remedy the situation quickly. In particular, watch for:

  • The moment a conversation becomes crucial
  • Signs that someone doesn’t feel safe

Ways to behave in that situation:

  • slow down, step back, and re-evaluate the situation

When Someone Doesn’t Feel Safe

There are clear signs where someone doesn’t feel safe in the conversation. These include:

  • Forcing their opinion into the conversation
  • Withdrawing significantly from the conversation

People often withdraw not because of the feedback, but because they believe you don’t have their best interest in mind.

Don’t Sugar Coat the Message

Those who are good at crucial conversations don’t rely on using statements that reduce the severity of the message. Instead, they focus on stepping back and establishing safety.

Rubric to Ensure a Crucial Conversation

There are two major aspects of a conversation that require alignment, in order for a conversation to be successful:

  1. mutual respect: do both parties believe the other has their best interest in mind?
  2. mutual purpose: do both parties have the same goal in mind?

Determine which one is compromised by “stepping out” and evaluating the situation. Then take the appropriate action to remedy one or the other.

Establishing Mutual Respect: Apologizing or Contrasting

If the mutual respect was broken down by someone being offended, an apology can work. The other strategy is to use a “contrast” statement.

Contrasting is the process of using a “do not” and a “do” sentence to clarify your intent and align goals. The “do not” sentence calls out the threatened person’s concerns and clarifies that it is not your intention to threaten or otherwise hurt the other person. The “do” sentence clarifies what you are actually trying to do.

Mutual respect is established once both parties feel that the other party has their best interest in mind.

Establishing Mutual Purpose: CRIB

A conversation will not move forward unless both parties are working toward a common goal. A common purpose is the building block by which one can move the conversation forward, and retreat back if the conversation begins to focus on more superfluous aspects like how that goal is achieved.

There is a four-step process to get there, with the acronym “CRIB”:

  • Commit to mutual purpose. Agree to find a mutual purpose to work toward; you have to want to achieve a solution to actually achieve it.
  • Recognize the purpose. Why does someone want to achieve that goal? The why is more important than the how.
  • Invent a mutual purpose: Sometimes the goals are fundamentally incompatible, at which point a conversation will probably go nowhere. Thus, there is a need to look elsewhere for a mutual purpose, likely one even more high-level than the current goal (e.g. the family’s happiness as a purpose over making more money).
  • Brainstorm new strategies: once a mutual purpose is established, it’s time to find the right strategy. Participants should keep an open mind and think outside the box.

Master your Emotions via Analyzing the Path to Action

The book calls out for reflection when one encounters an emotional situation. Making decisions while in a bad emotional state can often have poor results, as the decision doesn’t come from a well-reasoned, logical process.

To help yourself calm down and look at the situation logically, the book provides the following process:

  1. State the facts: help everyone understand how you arrived at that conclusion, by first stating what has explicitly occurred.
  2. Tell your story: explain your interpretation of the facts. One note here is to not downplay the story: adding apologetic phrases like “call me crazy” or “I’m probably wrong” reduces the confidence in your story being accurate from the get-go.
  3. Ask for others’ paths: at this point, invite others into the conversation to share their stories.
  4. Talk Tentatively: one skill to build is making sure that you keep an open mind as you hear others’ stories. Keep in mind the goal is to arrive at an amicable solution, and that requires being open to new facts and opinions.
  5. Encourage testing: again keeping in mind the goal is to arrive at an amicable solution, there needs to be a safe environment where everyone can ask questions and change the current story. Encourage everyone to share different ideas and theories.

Getting Others to Share Their Story

One common scenario is others not sharing their story due to the common deflection strategies of getting violent or getting quiet. Crucial Conversations provides the following workflow to keep the conversation going:

  1. Ask: before moving toward story sharing strategies, simply ask and see if you can get stories by asking the right questions.
  2. Mirror and Paraphrase: as you receive replies, make sure the other parties feel heard by paraphrasing to them what you’ve heard.
  3. Prime: if those techniques still lead to silence or violence, then it’s time to prime the conversation by taking a guess at what the other party is thinking. This is the last choice as it’s always better to get the meaning directly, but this requires a tentative “I think you’re feeling…”.

No Violent Agreement

Sometimes one can get caught up in winning some sort of battle. Those who are really effective at conversations stop debating a topic once they agree on it, rather than nitpicking minor details. If there is a component where there is disagreement, they first acknowledge the agreement, then build on top of it instead.

To start the conversation around the places you disagree with, compare the differences and views equally.

Differentiate Discussions vs Decision Making

Discussions and getting opinions doesn’t mean that everyone gets control over making the decision. It’s important to determine the right decision making strategy depending on the situation.

Decision Making Strategies

The final chapter of the book with new content discusses different decision-making strategies, and where each is appropriate.


Voting works well if the decision needs to be made quickly, and everyone agrees to commit to the result.


Consensus is needed if the stakes are high, and strong buy-in of all parties is required. Voting does not necessarily provide strong buy-in to the end result or decision.


Overall, Crucial Conversations was one of the best books I’ve read in 2019, and is definitely on my list of books to read to achieve success overall in life. Oftentimes people take the relationship aspect of living for granted, and having a clear rubric to help achieve positive results in conversations is fantastic.

My Bullet Journal Setup

In March of 2019, I read The Bullet Journal Method: Track the Past, Order the Present, Design the Future by Ryder Carroll. In the book, Ryder discusses his process of using journaling to help focus one’s life. After reading the book, I took up the practice as well, taking a fair bit of liberty with the process. I’ll talk about my review of the book, as well as the process I arrived at.

Review of the Book

As is the case with many self-help books, this one could have been shorter. But having stories helps illustrate the major lessons, and helps embed the ideas in our minds.

The main philosophy of bullet journaling comes down to two techniques: jotting down notes to help one recall the important events, and periodically reviewing those notes to reflect and re-focus on what’s valuable.

Bullet journaling does this by creating several recurring situations where one should review the notes and aggregate them. By re-writing the same notes over and over again as you aggregate them into weekly or monthly summaries, the writer will eventually drop the ones that are unimportant, as it’s time-consuming to copy over notes or to-dos.

The notes themselves are flexible and are allowed to contain a combination of to-dos, life events, and thoughts. One should annotate these different types with a different symbol.

I think skimming is enough to get the context needed to reproduce or try the system out for yourself. You may want to read the whole book if you find that the system isn’t helping you in the way you want it to.

My Bullet Journal Learnings and Setup

My Notes

When I started bullet journaling, I faithfully followed the system as prescribed. Eventually, I landed on the following system:

  • 1 page at the beginning of the book for the big life goals
  • 1 page at the beginning of the book for to-do at some point in the future (that year)
  • 1-2 sheets for a giant to-do list to accomplish within the duration of the notebook
  • 1-2 sheets for a giant to-do list for the week
    • One category for work
    • One category for home / personal
  • 1-2 sheets per day
    • Instead of taking short notes, my notes are detailed, and usually span one or two sentences per event.
  • A few note categories including:
    • to-do (a dot that I cross out with an X when done, or a > when I need to push it out to the future)
    • event (a triangle)
  • Most notes generally have a number on the left-hand column that documents the amount of time I spent on the task (60m, 90m, 2h)

Here’s some more context on why I landed on this approach.

1 life goal page, 2 future pages, and 2 book pages

This is nearly identical to what the Bullet Journal method recommends.

1 life goal page with 3-4 goals of what to focus on broadly really does help keep me focused. It’s helpful to see that as the first page I flip to every time I open the journal. For 2019, this has helped me keep a much sharper focus on learning guitar and piano better, as well as improving my Japanese. Previously, I’d find myself wandering to whatever project was at the top of my mind at the time, which helps me feel productive short-term, but doesn’t align with what I really want to accomplish.

2 future pages work well just to document cool ideas or smaller projects for the future. For me, this includes conferences I’d like to speak at, activities like search and rescue, and interesting software projects. On the downside, I really don’t do anything here except add to the list, which at some point will probably expand beyond 2 pages. But maybe that’s a good reason to cull it.

2 book pages documents tasks I should complete within the duration of the book. This section has generally gone unfinished, as my weekly to-do list is already too long for me to accomplish. But once in a while I will pull from this list and move it to my weekly list. Professionally, this has helped remind me what long-term projects I own and should deliver on. Examples for me in this section include driving a revamped interview process at work, or investigating the value of further investment in a tool my team owns.

Weekly to-do list

My weekly to-do list and my daily updates are the sections I refer to the most. The weekly list has been very helpful, especially professionally. As a manager, I’m often asked to own a bunch of fairly random projects and to-dos, where I needed some sort of project management just for my own day-to-day. Bullet journaling provides just enough organization to help me keep track of those action items, and keeps me focused on those before working on others.

My weekly ritual consists of copying those over, as well as aggregating to-dos left over from the individual days. A lot of my to-dos are small, and I typically clear them out the following Monday.

Having a to-do for the week personally has also been valuable. Again, the primary value there is to have a queue of projects so I can finish one before moving on, instead of just moving to a different project because it’s at the top of my head. It also helps to have a list of projects, as I typically only have 1-2 hours of free time a day. Often that free time will be spent on browsing the internet or watching TV unless I dive into an activity within 5-10 minutes of it starting (right after my kids go to bed).

Examples of what I put in this section include:

  • setup a meeting with so-and-so team on project X
  • personal: author an SVN server setup using Terraform
  • personal: fix a bug with X

Daily Notes

My daily notes are probably what differs the most from the standard bullet journal.

Using full paragraphs in the daily notes

One major difference: Every bullet is a 1-2 sentence paragraph, rather than a concise one liner.

I started with the one-liner, but found when reviewing a month or two later, I had very little idea what that note even pertained to. One major aspect that attracted me to bullet journaling was a written log where I can recall exactly where my time went to that day. This is generally useless to me if I can’t recall why that was important, what the impact of the event was, or my thoughts at the time on it.

Breaking that rule has made my bullet journaling much more time-consuming: it takes me 15-20 minutes across the day to write down my full notes. But the benefit has been significant: I’m able to recall very specific milestones my kids achieved, remember important details in my discussions, and remember why a task was important, even six months down the line. It’s certainly worth taking time out of my day to be able to enjoy these memories years from now.

Writing down the time a task took

One other difference (or slight alteration) is writing down the amount of time each event took. I do this by writing down the explicit amount on completion of the task.

No page numbers

One minor difference is I don’t write down page numbers. I’ve found little to no value in having them. There’s a technique in bullet journaling to extend sections by adding random pages in the book, and keeping track of these sections by writing down all the pages in a table of contents in the beginning, but I’ve found I’ve almost never had the need to extend a section. Instead, I’ll pre-allocate an extra page, and just suffer the random empty page if I over-allocate.

Journal and Writing Utensils

I’ve experimented with a few different types of notebooks and writing utensils. I’ve currently settled on:

Below I have some detailed thoughts.

Notebook Size

I wanted a notebook size that was fairly portable: I don’t always carry a bag with me, and it’s very inconvenient to always have one of your hands full with a notebook. If I don’t have the notebook with me at all times, I won’t be able to take notes or jot down thoughts at that precise moment.

After trying some larger and smaller sizes, I settled on the pocket format (3.5 by 5.5 inches), as many companies produce notebooks at that size. I originally wanted to standardize on A6 but it’s harder to find variants with a smaller page count. Anything smaller than the pocket size results in a day’s notes taking several pages.

Once I decided on the format, the next factor was page count. I’ve found that with detailed notes, a day can consume 2 pages. As the majority of pocket notebooks (such as Field Notes) are 48 pages (24 sheets), I would find myself going through a notebook every couple of weeks. That meant a lot of work rewriting the same front 5 pages over and over again. I was looking to rotate my books at most once a month, so I landed on the Moleskine Cahier, which has 64 pages.

Grid for Notebook Lines

For the line format, most journals offer plain, lined, grid, or dotted pages. I don’t have a strong preference among them, but most notebook manufacturers use a much smaller grid than lines, which works well with my smaller handwriting.

This also motivated the decision for the Cahier above: I would have preferred more pages (such as with the Moleskine Volant), but they did not have a grid option, and the lines are quite large.

Pen: Something Waterproof and Fine

As my notebook size is quite small, I wanted a way to fit as much content on a page as possible. Part of that means choosing smaller lines, which means choosing a finer pen to ensure that what you write is legible.

I looked for the finest pens on Amazon, and tried a few out. I landed on the Pilot G-Tec-C as it provides very fine, clear lines. For a while I tried the Staedtler fineliner (0.3mm width). It was working well, but I decided against using it further when I accidentally dropped my bullet journal in my bathtub, and all the pages written with the fineliner were completely illegible. The Pilot G-Tec-C pages were fine.

Here’s a few photos to help you understand the horror:

Pages written with the Staedtler fineliner
Wet notebook with the G-Tec-C Pen

Summary and Final Thoughts

Overall this has been a life-changing book and practice for me: I’m a lot more focused on what I actually want to do, and it’s helping me recall some truly precious moments in my life.

Vanilla bullet journaling did not work well for me due to its terseness, so there’s definitely a need to customize the system to suit your needs and desires. And it’s good that Carroll calls that out in his book.

And as a finishing note, here’s a photo of 9 months of my bullet journals, bundled up, and summarizing my 2019 year.

Book Report: Trillion Dollar Coach

Trillion Dollar Coach by Eric Schmidt describes the coaching style of Bill Campbell, a former football coach turned Silicon Valley and venture capital exec who played a critical role in the development of individuals such as Eric Schmidt and Steve Jobs.

If there is any lesson this book imparts, it is the need to focus on and be heavily invested in the individual. Bill Campbell was known for giving upfront feedback, and for being heavily invested in you as a person outside of work. He learns your name, your family members’ names, and their goals and aspirations as well.

Investment in the Individual

Bill Campbell’s coaching style focuses intensely on giving blunt feedback on the situation, and giving relentless support for your ideas. Blunt feedback can sometimes be taken negatively, but built upon a bedrock of clear desire for you to succeed, it can be received less negatively, and maybe even positively.

There are several examples in the book of negative feedback being given, or people being laid off, but feeling good about the situation because of the way Campbell delivered the feedback. Bill’s own quotes on this show that the intention is genuine: he feels that it’s important to emphasize that the individual is still valuable and can leave with “their head held high”.

I feel like there is a good lesson about leadership here: leaders who really care about their people are leaders who can help their people grow.

If one views leadership solely as a position focused on delivering business goals, people will not flourish, which may ultimately decide the fate of the company. Talented people like being somewhere they are valued. Regardless of the compensation or perks, it’s hard to keep talent if they feel they can be replaced and leadership will not care. That feeling is primarily shaped by the direct manager.

Focusing on the person also results in building a more personal relationship than is normally considered acceptable at an organization. There is a concern that this can lead to favoritism, but I think a good manager can balance this by ensuring that they care about all of their direct reports equally.

The other aspect of focusing on the person is backing them on their ideas. Being a person who helps get buy-in on ideas their directs are trying to move forward, and who displays that support publicly, can build a strong rapport between manager and direct. Bill is also willing to go the extra mile, whether it’s making connections or helping the individual get invited into the right organization or group.

Listening and Observing

Reading about Bill Campbell’s leadership, it definitely seems like he operates primarily in the background. Rather than be the driver of a big idea, Bill helps support those who want to drive.

When one requests Bill’s advice, he likes to jump into the meetings and observe. From there, he gives a recommendation privately to the person driving the initiative.

I think this type of leadership really helps the individual grow. It’s very tempting to go in as a manager and try to drive or fix things yourself, but that doesn’t provide a learning opportunity for the individual. Instead, I’ve found that letting someone take the lead, and rapidly and aggressively giving feedback, allows the person to achieve tremendous growth.

Concerns with The Book

  • The book claims to be data driven, but it seems like the authors picked studies that correlate with the philosophy of Bill Campbell, rather than finding studies of effective managers and seeing if Campbell fits them.


I think Trillion Dollar Coach is a very helpful book for realizing the need to focus on growing and investing in one’s directs, rather than driving everything from the top. The book also goes over some great examples to help one better understand what it means to execute on these philosophies.

It’s very easy to get caught in the rat race of an organization and lose sight of what will really ensure sustained growth of a team. Books like these are great because they remind me of the need to focus on people, and provide inspiring examples that showcase the impact that kind of leadership can have.

Book Report: The Signal and the Noise

The Signal and the Noise by Nate Silver focuses on the use of statistical models to forecast and predict real-life events such as weather or presidential elections. Nate Silver is also famous for running FiveThirtyEight, a website dedicated to predicting the outcomes of political races and sporting events. Nate was also the author of PECOTA, an algorithm to project the performance of baseball players.

The book is split into two sections: the first describes the state of the world in hard-to-forecast domains such as earthquakes, and the second goes into ways to improve those forecasts.

For me, the big takeaways are:

Models Should Forecast Results on a Spectrum, Not a Single Yes / No Answer

Oftentimes the results of statistical models are announced as a single answer: e.g. “there is a 10% chance of a magnitude 4 earthquake occurring tomorrow.” This type of declaration can make models look less accurate than they really are, as it does not convey the wide range of outcomes that each have some chance of occurring.

Change the Model as New Data Comes in

A statistical forecast should be a calculation that consumes all available relevant data to produce a prediction. Not modifying the forecast when new data arrives can reduce its accuracy. This is a difficult concept to accept intuitively, as there is a common need for us as people to stick to a particular belief or philosophy. There is also the concern, say as a political candidate, of being seen as wishy-washy for changing your opinions quickly. It’s not an issue that can be resolved in all situations, but being open to new data is the important takeaway.

Beware of Overfitting

The title of the book refers to this idea: separating the signal from the noise in the incoming data. The number of variables one could feed a forecast is effectively infinite: as a result, one can build extremely complex statistical models that fit all known data well, but perform poorly when evaluated against new data.

This is the same problem that occurs for machine learning algorithms as well: utilizing the right datasets plays a central role in a successful machine learning algorithm, as introducing too much data can result in overfitting.

Side note: I think there’s a machine learning failure mode that could affect forecasts as well. Machine learning models can accidentally be trained to detect a different distinction than intended: in one anecdote, a military group tried to train an algorithm to detect tanks, but instead trained it to distinguish rainy days from sunny ones, because all the pictures with tanks in them were taken on rainy days and the pictures without tanks on sunny ones.

The correlation != causation caveat applies to statistical models as well: one could invent a completely unrelated statistical model that coincidentally matches the expected results on its input data. Poor data is also a problem for models.
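To make the overfitting trap concrete, here is a toy sketch (invented data, not an example from the book): a model that memorizes its training data scores perfectly on that data, while a simpler model that captures only the underlying signal does better on new observations.

```python
import random

random.seed(42)

# Underlying signal: y = 2x. Training observations include noise.
train = [(x, 2 * x + random.gauss(0, 5)) for x in range(20)]

def signal_model(x):
    return 2 * x  # captures only the underlying relationship

lookup = dict(train)

def overfit_model(x):
    return lookup[x]  # memorizes every training point, noise included

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# New observations of the same signal, without the training noise:
new_data = [(x, 2 * x) for x in range(20)]

print(mse(overfit_model, train))     # 0.0: "perfect" on known data
print(mse(overfit_model, new_data))  # large: it learned the noise
print(mse(signal_model, new_data))   # 0.0: it learned the signal
```

The memorizing model is the extreme case of fitting every wiggle in the known data; any sufficiently complex model drifts toward the same behavior.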

Use Bayesian Inference to Help Evaluate Probabilities

As a more abstract concept, Bayesian inference is the idea of slowly converging on the correct model by iteratively adjusting a forecast in light of new data that validates or invalidates an assumption. The book describes this process through Bayes’ Theorem, the underlying equation used to factor corrections into the probabilities.

To begin this process, one requires a base model: this prior could be a qualitative guess, derived empirically, or calculated from first principles. With that as a base, the model is tuned with each incoming data point, either increasing or decreasing the likelihood of the event in question.

Bayesian inference is widely applicable because the base model can be a qualitative assumption, and still arrive at the correct model: it just takes longer if the initial assumption is incorrect.

I draw from this a rational approach to arriving at a great model: take a model created with as much relevant data as possible, and slowly correct it as new data comes in.
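As an illustration of that loop, here is a minimal sketch of Bayesian updating (my own toy example, not one from the book), using Bayes’ theorem P(H|D) = P(D|H) · P(H) / P(D) to revise a belief about a coin as flips come in:

```python
def update(prior, p_heads, observation):
    """One Bayesian update. prior and p_heads map hypothesis -> probability."""
    posterior = {}
    for hypothesis, prob in prior.items():
        # P(D|H): likelihood of this observation under each hypothesis
        likelihood = p_heads[hypothesis] if observation == "heads" else 1 - p_heads[hypothesis]
        posterior[hypothesis] = prob * likelihood
    total = sum(posterior.values())  # P(D), the normalizing constant
    return {h: p / total for h, p in posterior.items()}

# Two hypotheses: the coin is fair, or it is biased 80% toward heads.
p_heads = {"fair": 0.5, "biased": 0.8}
belief = {"fair": 0.5, "biased": 0.5}  # a qualitative starting prior

for flip in ["heads", "heads", "heads", "tails", "heads"]:
    belief = update(belief, p_heads, flip)

# After mostly-heads data, the "biased" hypothesis is now the more likely one.
print(belief)
```

Note that even if the starting prior had leaned heavily toward “fair”, enough flips would eventually pull the belief toward the correct hypothesis; it just takes longer.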

Nate makes a great point in this section, specifically about the way we validate or invalidate scientific claims. In an ideal world, all science would reach conclusions via Bayesian inference: we become more confident in a specific model only as it accurately predicts more outcomes.

The Perceived Value of a Forecast

One interesting section discussed how The Weather Channel derives its forecasts. The forecasts could be more accurate, but they choose to skew the values for a better experience and greater trust from the consumer. Changes include:

  • rounding away small decimals to make the prediction easier to consume
  • increasing the stated likelihood of rain: being prepared for rain that never arrives is a small inconvenience, and the sun is a pleasant surprise. In contrast, rain when sun is expected can significantly disrupt the plans for the day, and as a result significantly hurt trust in the forecasting system.
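As a sketch of what such a skew might look like (the rule and numbers here are invented for illustration, not The Weather Channel’s actual method):

```python
def presented_rain_chance(model_probability):
    """Convert a raw model probability of rain into a consumer-facing one."""
    p = model_probability
    # Nudge marginal rain chances upward: an unneeded umbrella is a small
    # cost, while unexpected rain erodes trust in the forecast.
    if 0.0 < p < 0.5:
        p += 0.05
    # Round to a friendly number so the prediction is easy to consume.
    return round(p, 1)

print(presented_rain_chance(0.23))  # presented as 0.3, not 0.23
```

The point is that the presented number optimizes for how the consumer experiences being wrong, not for raw accuracy.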

This leads me to a thought I come back to again and again: there is the pure scientific and logical solution to a problem, and then there is the more nuanced problem of how humans interpret and act on the results. The Weather Channel’s choice to willfully reduce the accuracy of its forecasts is one example, a conclusion driven by understanding which situations are most inconvenient for consumers.


Overall this was a fantastic read (or listen). It was a pleasure to get a better understanding of Nate Silver’s thought process, his perspective on the statistical community, and the areas where the community can improve.

Tech Notes: Debugging LLVM + Rust

I’m working on a programming language, writing the compiler in Rust. I’m currently stuck on a segfault that occurs with the following IR (generated by my compiler):

; ModuleID = 'main'
source_filename = "main"

define void @main() {
  %result = call i64 @fib(i64 1)
}

define i64 @fib(i64) {
  %alloca = alloca i64
  store i64 %0, i64* %alloca
  %load = load i64, i64* %alloca
  switch i64 %load, label %switchcomplete [
    i64 0, label %case
    i64 1, label %case1
  ]

switchcomplete:                                   ; preds = %case1, %entry, %case
  %load2 = load i64, i64* %alloca
  %binop = sub i64 %load2, 1
  %result = call i64 @fib(i64 %binop)
  %load3 = load i64, i64* %alloca
  %binop4 = sub i64 %load3, 2
  %result5 = call i64 @fib(i64 %binop4)
  %binop6 = add i64 %result, %result5
  ret i64 %binop6

case:                                             ; preds = %entry
  ret i64 0
  br label %switchcomplete

case1:                                            ; preds = %entry
  ret i64 1
  br label %switchcomplete
}

This segfaults whenever I run my compiler, which compiles the code and immediately executes it in LLVM’s MCJIT.


When I run my code under a debugger, the segfault still occurs, but not at the same point as when I run the app on the command line.

VS Code’s debugger returns:

So something is happening during the FPPassManager. Apparently the FPPassManager is what handles generating code for functions (gleaned from reading the source code).

getNumSuccessors was a bit nebulous for me… what does this function actually do? I wasn’t familiar with the term “successor”: it must be something custom to LLVM. Some Googling finds:

So I guess a successor refers to the statements that can immediately follow the existing statement. Core.h of LLVM lists getNumSuccessors among the function calls for a terminator. So what precisely is a terminator?

Looking through the LLVM source code again, it’s the classification for instructions that terminate a BasicBlock. The list from LLVM 9 looks like:

  /* Terminator Instructions */
  LLVMRet            = 1,
  LLVMBr             = 2,
  LLVMSwitch         = 3,
  LLVMIndirectBr     = 4,
  LLVMInvoke         = 5,
  /* removed 6 due to API changes */

Looking at the traceback, this is specifically occurring in the updatePostDominatedByUnreachable. The source code for that is:

/// Add \p BB to PostDominatedByUnreachable set if applicable.
void
BranchProbabilityInfo::updatePostDominatedByUnreachable(const BasicBlock *BB) {
  const Instruction *TI = BB->getTerminator();
  if (TI->getNumSuccessors() == 0) {
    if (isa<UnreachableInst>(TI) ||
        // If this block is terminated by a call to
        // @llvm.experimental.deoptimize then treat it like an unreachable since
        // the @llvm.experimental.deoptimize call is expected to practically
        // never execute.
        BB->getTerminatingDeoptimizeCall())
      PostDominatedByUnreachable.insert(BB);
    return;
  }

The actual error occurs on the first instruction of the function’s assembly:

; id = {0x00012806}, range = [0x000000000093fbb0-0x000000000093fc3b), name="llvm::TerminatorInst::getNumSuccessors() const", mangled="_ZNK4llvm14TerminatorInst16getNumSuccessorsEv"
; Source location: unknown
555555E93BB0: 0F B6 47 10                movzbl 0x10(%rdi), %eax
555555E93BB4: 48 8D 15 81 3B D5 01       leaq   0x1d53b81(%rip), %rdx
555555E93BBB: 83 E8 18                   subl   $0x

I can’t read assembler very well, but since this is a method, the first instruction most likely loads a field off of the current object (%rdi holds the this pointer in the System V ABI). So getNumSuccessors is probably receiving a pointer to something it doesn’t expect, most likely a null pointer dereference.

My hunch now is I have a basic block without a terminator statement, causing the JIT pass to fail.
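The invariant my hunch relies on can be sketched as a toy check (this is not LLVM’s actual verifier, just an illustration of the rule that every basic block must end with exactly one terminator):

```python
# Opcodes that terminate a basic block, per the terminator list above.
TERMINATORS = {"ret", "br", "switch", "indirectbr", "invoke", "unreachable"}

def check_blocks(blocks):
    """blocks maps block name -> list of instruction opcodes."""
    errors = []
    for name, instrs in blocks.items():
        terminators = [op for op in instrs if op in TERMINATORS]
        if not instrs or instrs[-1] not in TERMINATORS:
            errors.append(f"{name}: does not end with a terminator")
        elif len(terminators) > 1:
            errors.append(f"{name}: has more than one terminator")
    return errors

# Mirrors my buggy IR: main's entry block never returns, and the case
# blocks contain both a ret and a br.
print(check_blocks({"entry": ["call"], "case": ["ret", "br"]}))
```

If getNumSuccessors is called on a block whose terminator is missing, getTerminator can hand back a bad pointer, which matches the crash above.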

There was a missing return statement on the main function, but adding that alone didn’t change anything.

Fixing the blocks to each have exactly one terminator did indeed fix the issue! Ultimately, figuring out that a validator existed, and heeding its error messages, led to the solution.

Tech Notes: Updating Unity for Cerebrawl

I’m interested in starting a habit of taking notes while I take on some pretty difficult tasks, as a learning experience for myself, or for others if they find it valuable.

Today, I’ll be tackling updating Cerebrawl’s Unity from 5.6 to 2018.3.

This is actually pretty late in the journey: I’ve got a branch of 2018.3 working, I just need to figure out how to reconcile that with the month-and-a-half’s worth of changes that were made in the meantime.

My upgrade path thus far has been a combination of the following tools:

  • vscode, when I need to look at live code
  • sourcetree, when I need to do some fine-grained change picking
  • Unity, to see if the thing runs

Errors Again

Pulling up my branch again, there are errors around the lack of a TMPro namespace. It seems that TextMeshProUGUI doesn’t exist in TextMeshPro 1.3. Something to look into later, but for now commenting that out should be fine.

Next, I ran into a duplicate tk2dSkin.dll. It looks like that now goes in the “tk2d” directory rather than “TK2DROOT”, so I just deleted the old one.

Cherry-Picking the New Changes

We had to revert the 2018 Unity changes previously. Last time I tried to merge in the master branch (I use git-svn, so it’s effectively the SVN tree), I think git got confused because I had reverted a bunch of my changes, breaking everything and requiring me to apply those changes again.

This time, I should only pull in the changes made after that point. I created another branch to keep my working changes from being broken or lost in history when I merge in other changes.

I can use git cherry-pick to specifically pick up diffs in that version range:

git cherry-pick b813563..5646829

I ran into multiple conflicts while cherry-picking. The resolution was to accept the incoming changes again and again (these are Unity asset files, not ones I needed to touch for the upgrade).

Once those were done, I switched back to the Unity editor and let it load again.

It Works!

Huzzah! For the most part everything has migrated over. The biggest challenge on this one was upgrading tk2d toolkit, which was broken by newer Unity versions.

Merging Changes In

I hit another snag trying to merge files in. git-svn attempted to rebase my changes on top of the existing branch, which doesn’t work very well as it tries to merge the diffs again.

My best hope is to construct a single changeset that applies all of my changes on top of what’s in SVN today. To do so I run:

git svn fetch                           # pull the latest revisions from SVN
git checkout master
git reset --hard git-svn                # make master match SVN exactly
git clean -xdf                          # drop untracked and ignored files
git checkout feature/merge-unity-2018
git reset --soft master                 # keep my tree, stage it as one diff against master
git commit

Finally a git svn dcommit (git-svn’s version of a push) and all the changes have been made!

Book Report: How To Talk So Kids Will Listen… And Listen So Kids Will Talk


How to Talk So Kids Will Listen…And Listen So Kids Will Talk by Adele Faber and Elaine Mazlish is a book about tools for communicating with, and resolving issues with, one’s children.

I find this true of every parenting book I read: there are always some tips that apply to pretty much anyone, not just children. I’ll call those out as I go.

The book goes through a variety of tips, many of them revolving around a child misbehaving or a conflict arising. The authors give specific techniques that tackle conflicts from a variety of angles, including defusing strong emotions, nurturing independent conflict resolution, and slowly changing chronic negative behavior.

I’ll focus on a few that stood out for me.

Naming Feelings

Studies have shown that children cope better with their emotions when they have a way to describe them. By being specific with the names of feelings and describing them (“It sounds like you’re frustrated”), a parent can dampen the emotion and expand their child’s vocabulary at the same time.

Give the Child Time to Vent and Brainstorm

One of the major themes in the book is nurturing independence: giving control of a situation to one’s children, and also the opportunity to come up with their own solution, builds conflict resolution skills that do not require direct intervention from the parent.

As a parent, I find this one a little difficult to uphold. There is an innate desire to be there for your children, and it is hard to walk the line between negligence and helicopter parenting. I often find myself wanting to do things for them, for example linking trains together when they’re struggling, or immediately jumping into a conflict.

Nevertheless, the authors advise giving the child some time to think about their own solution before giving them yours. For example, if a child asks the parent whether they should go to their best friend’s sleepover even though they don’t like the other kids there, give the child some time to consider why it might be important to go, and see if they make a reasonable decision by themselves. If they come to a decision you disagree with, that’s the time to start offering your own opinion.

The goal is to raise a child who can make these decisions on their own. By having them work it out with the parent observing rather than dictating, the child can build their own problem-solving skills in a safe environment. This is preferable to the child having to scramble to grow these skills the first day they’re on their own.

Works for Adults?

As an analytical person, I often forget that when someone wants to talk, it’s often to vent about their own emotions, rather than necessarily brainstorming a solution.

In a conversation about a distressing situation, giving the other person time to fully vent about what happened, and letting them brainstorm their own solutions out loud, could be a better option than giving your own opinion. It can also help nurture independence in a professional manager / individual contributor relationship.

Have Children Draw Their Feelings

When a child has extreme emotions, it’s difficult to move from that point to discussing the situation and emotion more reasonably. The authors offer drawing as a solution: give a pencil and paper to the child, ask them to draw their feelings, and keep having them draw until they get into a better mood.

I haven’t had the opportunity to try this myself (mine is too young for those types of instructions), but I’m taking note of anything that can help with strong emotions.

Describe and Reinforce Positive Behaviors

Oftentimes, our off-handed comments reinforce behaviors that we want to eliminate. Calling a child a trouble-maker, rowdy, or air-headed makes the child feel like those terms describe them, causing them to act further in that fashion.

The authors suggest refraining from using those types of words, and instead highlighting when the child shows positive attributes.

For example, for a child who seems normally forgetful, calling out a situation where they remembered and being very descriptive is helpful.

“Yes, you remembered to do your homework and bring it to the teacher. You are very responsible!”

This can help counter the pigeon-holing that adults typically do. It’s also good to consider that this type of phrasing can counteract other adults: sometimes it is not parents that use these phrases, but teachers and other family members.

Pigeon-holing descriptions can also come from off-handed comments to other parents. Children who overhear these statements can be affected as well.

Works for Adults?

As adults, we often get typecast into specific roles, and the words that describe those attributes can pigeonhole us just as much as they do children.

If you’d like someone to change, I think calling out when that person exhibits those changes, and thoroughly describing it, is a great way to reinforce positive behavior.

Offer Choices

A common frustration comes from children not listening to their parents, especially in time-sensitive situations. Maybe a child does not want to stop what they are already doing, or they refuse to negotiate on a friend coming over.

When it’s impossible to make both sides happy, the authors recommend offering alternatives and letting the child choose one. In addition to clarifying that the original choice is out of the question, that little bit of control can sometimes help your child be more cooperative.

The example in the book is a friend of a child who refused to get out of a tree house to get picked up. In this situation, the mother offered the friend two choices: he could go down slowly as a sloth, or go down quickly like a monkey.

By eliminating the original choice of staying in the tree, the options offered also catered to the friend’s natural desire to play. A simple “come down” did not suffice, but these options led the friend to jump down like a monkey, accomplishing the desired goal.

Works For Adults?

I think the desire for control continues throughout our whole lives. When dealing with a difficult situation from a leadership position, the right solution might be offering control in an area where one can negotiate. It also helps others feel more invested.

Using “When They’re Ready”

Further on the themes of empowering children and reducing typecasting, the authors recommend against describing what are likely temporary behaviors as personality traits. For example, excusing a child who refuses to say hi as “just shy” can result in the child internalizing that trait. Instead, using temporary phrasing like “when they’re ready” helps the child understand that this isn’t a defining characteristic.


I think this book is a great introduction to multiple techniques for communicating well with children. Everyone has times when conflict resolution with their child is difficult, and having the right tools, ones that not only resolve the conflict but also nurture the child, is great.

I’m looking forward to trying these and maybe I’ll come back with some more thoughts.