<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.1.1">Jekyll</generator><link href="https://y.tsutsumi.io/feed/index.xml" rel="self" type="application/atom+xml" /><link href="https://y.tsutsumi.io/" rel="alternate" type="text/html" /><updated>2026-04-19T00:11:07+00:00</updated><id>https://y.tsutsumi.io/feed/index.xml</id><title type="html">Yusuke Tsutsumi</title><subtitle>My blog on software, productivity, and obsessively optimizing. I work at Google, ex-Zillow. Thoughts my own.</subtitle><entry><title type="html">Local, Parallel, and Autonomous: Building a Fully Agent-Generated Codebase</title><link href="https://y.tsutsumi.io/2026/03/30/agentic-orchestration-with-beads-and-lelouch/" rel="alternate" type="text/html" title="Local, Parallel, and Autonomous: Building a Fully Agent-Generated Codebase" /><published>2026-03-30T07:00:00+00:00</published><updated>2026-03-30T07:00:00+00:00</updated><id>https://y.tsutsumi.io/2026/03/30/agentic-orchestration-with-beads-and-lelouch</id><content type="html" xml:base="https://y.tsutsumi.io/2026/03/30/agentic-orchestration-with-beads-and-lelouch/"><![CDATA[<p>I’ve started a video series on agentic coding, and you can view the first video <a href="https://youtu.be/dXyPfslMLYE?si=uRz0qamR3sqWTW0i">here</a>. This is a short summary if you prefer a post instead!</p>

<p>The agentic workflow is something I’m calling “local, parallel, and autonomous”.
It’s a deep dive into making a whole codebase from scratch, completely
hands-off.</p>

<h4 id="the-core-attributes">The Core Attributes</h4>

<p>The main pillars for this workflow are straightforward:</p>

<ul>
  <li><strong>Local:</strong> All the git worktrees run locally on my machine, avoiding the complications of remote execution.</li>
  <li><strong>Parallel:</strong> It’s built to scale so multiple agents can work on different issues across multiple worktrees at the same time.</li>
  <li><strong>Autonomous:</strong> The agents complete issues from start to finish without human intervention.</li>
</ul>

<h4 id="choosing-the-right-projects">Choosing the Right Projects</h4>

<p>Let’s be clear: as of March 2026, there are a <em>lot</em> of risks with this approach. I wouldn’t confidently build and deploy a production SaaS web application this way because the necessary guardrails for safety and privacy requirements just aren’t there yet.</p>

<p>Because of that, I choose self-contained, smaller projects. I’ve built a few codebases this way, including:</p>

<ul>
  <li>Small VS Code extensions (one to <a href="https://github.com/toumorokoshi/vscode-hivemind">synchronize my dotfiles</a>, another to add missing file operations).</li>
  <li>A self-contained GitHub page that <a href="https://github.com/toumorokoshi/paste-as-simple-markdown">converts LaTeX to Markdown and plaintext</a>.</li>
  <li><a href="https://github.com/toumorokoshi/lelouch/tree/main/docs">Lelouch</a>, the workload orchestrator I use for this very process.</li>
</ul>

<p>For more complex, large-scale systems, I still rely heavily on a manual cycle where the AI generates code and I review it closely.</p>

<h4 id="bootstrapping-setting-the-guardrails">Bootstrapping: Setting the Guardrails</h4>

<p>To get things started, I wrote a repository called <a href="https://github.com/toumorokoshi/agentic-bootstrap">agentic-bootstrap</a> to provide the scaffolding. Inside its <code class="language-plaintext highlighter-rouge">templates/</code> directory, I keep generic rules that help agents stay on the rails.</p>

<p>The bootstrap relies on a few key components:</p>

<ul>
  <li><strong>AGENTS.md:</strong> This file acts as the playbook. It tells agents to study the README first, ensures CI passes, enforces linting rules, and mandates that documentation is always updated.</li>
  <li><strong>specs/:</strong> This directory holds design specifications. Splitting the design up limits the context an agent has to load to accomplish a particular task, separating the “designer” agent from the “executor” agent.</li>
  <li><strong>Example Files:</strong> For the Heaptrack UI, I included a sample <code class="language-plaintext highlighter-rouge">.zst</code> file in the repo so the agent actually knows how to read the format it’s supposed to parse.</li>
</ul>

<h4 id="writing-specs-the-grill-approach">Writing Specs: The “Grill” Approach</h4>

<p>When it’s time to write the overall design, you can write it yourself, but I
used an approach called “grilling” to illustrate another, more agentic way to
bootstrap.</p>

<p>I tell the agent: <em>“Interview me about the purpose of this project”</em>. It asks me questions about architecture and UI, and I respond with my requirements—for example, that the app should be a purely client-side React app, include dark mode, and support drag-and-drop.</p>

<p>The agent then writes out the design documents in the <code class="language-plaintext highlighter-rouge">specs/</code> directory. I usually do a quick offline review; for instance, the agent once hallucinated some weird typography requirements asking for “Apple’s clear glass” style, which I promptly deleted.</p>

<h4 id="managing-the-queue-with-beads">Managing the Queue with Beads</h4>

<p>For the issue database, I use a tool called Beads, created by Steve Yegge. It acts as a local database for issues, can be reused across multiple worktrees, and is highly agent-friendly with support for <code class="language-plaintext highlighter-rouge">--json</code> flags.</p>

<p>You simply initialize it with <code class="language-plaintext highlighter-rouge">bd init --stealth</code>. From there, you can manage the queue:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">bd create</code> to make new issues.</li>
  <li><code class="language-plaintext highlighter-rouge">bd update {id} --status=open</code> or <code class="language-plaintext highlighter-rouge">--status=closed</code> to manage state.</li>
  <li><code class="language-plaintext highlighter-rouge">bd delete {id}</code> to remove them.</li>
</ul>
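<p>Since Beads exposes <code class="language-plaintext highlighter-rouge">--json</code> output, scripting the queue is straightforward. Here’s a minimal Python sketch of shelling out to <code class="language-plaintext highlighter-rouge">bd</code>; the wrapper functions are hypothetical helpers, not part of Beads itself:</p>

```python
import subprocess

def bd_command(action, *args, json_output=True):
    """Build a bd invocation as an argument list (hypothetical helper)."""
    cmd = ["bd", action, *args]
    if json_output:
        # --json makes the output machine-parseable, which agents rely on.
        cmd.append("--json")
    return cmd

def run_bd(action, *args):
    """Run bd and return its raw stdout (requires bd on your PATH)."""
    result = subprocess.run(
        bd_command(action, *args), capture_output=True, text=True, check=True
    )
    return result.stdout
```

<p>For example, <code class="language-plaintext highlighter-rouge">bd_command("update", "some-id", "--status=closed")</code> builds the argument list <code class="language-plaintext highlighter-rouge">["bd", "update", "some-id", "--status=closed", "--json"]</code>.</p>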

<p>I like to use a “thinking” model for a seeder prompt to write highly detailed initial issues, and I use the Beads VS Code extension to visually track what the agents are working on.</p>

<h4 id="orchestration-with-lelouch">Orchestration with Lelouch</h4>

<p>To orchestrate the workers, I wrote <a href="https://github.com/toumorokoshi/lelouch">Lelouch</a>. It’s effectively a way to run an agent, one per git worktree, pulling issues out of the Beads database.</p>

<p>To initialize it, just run <code class="language-plaintext highlighter-rouge">lelouch init</code> per worktree. It monitors your
repositories, pulling open tasks from the Beads database by priority and
dispatching them.</p>

<p>I typically run Lelouch with the Gemini Flash model via CLI because it has a
more forgiving usage limit. I also add a pre-prompt like <em>“run tests, commit,
and push”</em> to make sure it follows through.</p>

<p>When I run <code class="language-plaintext highlighter-rouge">lelouch run -v</code>, the loop begins. It moves issues to “in progress” and logs the agent’s responses directly into the working notes, so I can see exactly what it’s up to. It’s incredibly satisfying to just watch the issues update in the VS Code extension while the agents hum away.</p>

<h4 id="handling-bugs-and-gaps">Handling Bugs and Gaps</h4>

<p>The process isn’t perfect. During my Heaptrack build, the agent created a <code class="language-plaintext highlighter-rouge">gaps.md</code> file but failed to actually implement the missing features. I had to manually prompt it: <em>“There are gaps in gaps.md that are not yet implemented. Please create BD issues for them.”</em></p>

<p>When I tested the UI, I found that the flame graph had sizing issues and the <code class="language-plaintext highlighter-rouge">.zst</code> file wasn’t loading properly.</p>

<p>Fixing this is a matter of adding issues via the command line: <code class="language-plaintext highlighter-rouge">bd q "fix the bug with the zst file not outputting the flame graph"</code>. Lelouch immediately picks it up in the background.</p>

<h4 id="takeaways-for-2026">Takeaways for 2026</h4>

<p>I’ve done this with four or five projects now, and having a central issue database with parallel worker agents is a scalable model.</p>

<p>I know there are existing examples online of people who have done everything from writing compilers to full web browsers and games, with varying success. Despite that, I’m still skeptical, and I’m looking for ways to make this a consistent, high-quality enough generator that it can be used for more production code. To get there, I think it’ll take more guardrails, especially security-related ones. Some projects are already using agents with a security persona to try to get there.</p>

<p>But all technology has to start somewhere, and this workflow serves as an interesting baseline for a process I intend to grow.</p>]]></content><author><name></name></author><category term="coding" /><summary type="html"><![CDATA[I’ve started a video series on agentic coding, and you can view the first video here. This is a short summary if you prefer a post instead!]]></summary></entry><entry><title type="html">The New AI Era</title><link href="https://y.tsutsumi.io/the-new-ai-era" rel="alternate" type="text/html" title="The New AI Era" /><published>2026-03-16T07:00:00+00:00</published><updated>2026-03-16T07:00:00+00:00</updated><id>https://y.tsutsumi.io/the-new-ai-era</id><content type="html" xml:base="https://y.tsutsumi.io/the-new-ai-era"><![CDATA[<h1 id="the-new-ai-era">The new AI Era</h1>

<p>Recently I read about <a href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04">Steve Yegge’s post about GasTown</a>, an ecosystem of agent orchestration to go get monster-sized projects done.</p>

<p>I was a bit slow on the uptake when it came to AI: a lot of my code last year was hand-written when many had already started letting agents do most of it. My thought at the time was that the models themselves were often unable to produce a working feature, let alone <em>good</em> code.</p>

<p>But only a year later, with models like Claude Opus 4.6 and Gemini 3.1 pro, these agents are generating code extremely quickly. And the code is actually… decent. It’s not mind-blowing - the agent still makes multiple stylistic mistakes, despite my prompt telling it not to:</p>

<ul>
  <li>It does not adopt functional programming methodologies, freely mixing IO with business logic.</li>
  <li>It repeats itself constantly.</li>
  <li>It often rewrites things it could use a library for.</li>
</ul>

<p>But it’s good enough that I can get into a productive loop.</p>

<h2 id="my-workflow-today">My workflow today</h2>

<p>My workflow looks like:</p>

<ol>
  <li>Write up a design or prompt (I like to document my actual written prose somewhere, usually a DESIGN.md).</li>
  <li>Ask Cursor / Claude / Gemini to implement it.</li>
  <li>Review the code, ask it to fix things. Back to 1 if needed.</li>
  <li>Validate the feature e2e. Back to 1 if needed.</li>
  <li>Commit and push.</li>
</ol>

<p>I’d argue this is very hands-on. But even with this level of micro-managing, I get a ton of changes done in a short period. Over 1.5 hours on a random Saturday, I got through 7 feature requests in <a href="https://github.com/aep-dev/aep-e2e-validator/commits/main/">aep-e2e-validator</a>, and I felt good about the code and the results. Probably 3-4x what I could do with a coding-first approach, and with probably half the mental load.</p>

<p>Oh, and I also created a code extension to fill in some missing gaps I had in openvsx (<a href="https://github.com/toumorokoshi/code-fileclrk">code-fileclrk</a>). And a couple of bug fixes in other repositories. So I guess that’s more like 10 contributions in a 1.5 hour period, and a 5x increase?</p>

<p>Regardless, the multiplier at this point is astounding. As Steve said in his blog post, the bottleneck here really is no longer the code.</p>

<h2 id="my-workflow-in-the-future">My workflow in the future</h2>

<p>I thought the above was great, but then I read about GasTown. To be clear, I am still a pessimist on AI, so I don’t want to say I’ve drunk the Kool-Aid just yet. But the extremely detailed post helped me really understand what a 100x future could look like. It’s basically layers of abstractions, but with AI agents filling almost every single piece of the puzzle:</p>

<ul>
  <li>agents are writing code.</li>
  <li>agents are reviewing PRs.</li>
  <li>agents are helping summarize the changes and writing them out to a record of what decisions were made.</li>
  <li>you talk to an agent to help dispatch these tasks to other agents.</li>
</ul>

<p>And so on - multiple role-based agents, that fill whatever gap you’ve encountered with the other agents you’ve already deployed. Assuming the agent is good enough, or you’ve anchored the manual review to the point of no return (e.g. before performing a financial transaction, or perhaps cutting a release), you could delegate the next level of review over and over again.</p>

<p>And intuitively - this makes a wacky kind of sense, where you are <em>abstracting yourself to the point where you are focusing on the problems where human judgement is truly needed</em>. In the past, for someone to get something accomplished with software, you had to work on every single piece by hand (except perhaps the hardware you run on). But now, if there’s a task you don’t find particularly appealing, and as long as you’re willing to do some manual review, you can delegate it to an agent!</p>

<h3 id="the-caveat-you-need-the-domain-knowledge-to-succeed">The caveat: you need the domain knowledge to succeed</h3>

<p>Some might read the above and believe this means an individual does not need the same skills the agent has: for example, that a non-technical person can write software. I don’t really believe that’s the case, at least not 100%.</p>

<p>For one, it’s important to know that there are some concerns so mission-critical that you cannot reasonably address them without some sort of expertise or awareness. Those are:</p>

<ul>
  <li>Proper resource management, quotas, and monitoring so you don’t get a giant bill that bankrupts you.</li>
  <li>Security. If your service is compromised, the damage is irreparable.</li>
</ul>

<p>But there’s a huge swath of things that you don’t really have to understand too deeply at the beginning. And as long as you have the skills and time to dive in, AI can take the first stab at it:</p>

<ul>
  <li>command-line interfaces.</li>
  <li>applications that run locally.</li>
  <li>any codebases that have significant guardrails (linting / testing / etc).</li>
</ul>

<h2 id="final-thoughts">Final Thoughts</h2>

<h3 id="so-what-is-the-bottleneck-now">So what is the bottleneck now?</h3>

<p>It feels like the bottleneck now is the human’s ability to reason about a problem and figure out a good solution. I suppose one can argue that human reasoning was always the bottleneck, but I think the abstraction is at a higher level now.</p>

<p>Oh, and money / energy. You can produce as much as you have money to pay an agent to generate for you, unless your human ability to think about these problems limits you first.</p>

<h3 id="human-interaction-is-still-needed">Human interaction is still needed</h3>

<p>In this fever dream of AI, there are no humans. But to make real change happen in an organization, you still need to talk to people, and that’s the real challenge.</p>

<p>Pretty much everything that has real impact, to some extent, requires thoughtful interaction with others:</p>

<ul>
  <li>Getting buy-in on the idea.</li>
  <li>Getting others to use it (sharing demos, recordings, talking to people in 1-1s and through slack).</li>
  <li>Getting approvals to merge or ship the code.</li>
  <li>Getting the team that owns the product to accept your idea and put it on their roadmap (or accept your code).</li>
</ul>

<p>So although some coding is trivial now, I don’t believe that the bulk of my work, for example, is removed. Communication and interaction with individuals is still key.</p>

<h3 id="agents-give-us-the-freedom-to-dive-into-the-problem-we-want-to">Agents give us the freedom to dive into the problem we want to</h3>

<p>Perhaps there is a world where someone chooses to completely ignore code or a particular engineering problem altogether. But I don’t really see this as my motivation.</p>

<p>I think agents provide us the freedom to work on the problems we want to, or perhaps minimize the time we spend on the problems we don’t want to work on. We have a fairly autonomous coder / problem solver that can get things <em>mostly</em> right, and that we can trust when we need to.</p>

<p>But if we want to, we can dive right in! And there are reasons to do so: the code may have gotten too messy, the abstractions may not be easily understood, or the code may not be performant.</p>

<p>And if you want to assert that level of control and code something yourself, you can! So again, in some ways it’s refreshing in that you can spend more of your time on the problems <em>you</em> care about, not just the rote ones that need to be done.</p>

<h3 id="when-and-how-will-we-get-to-this-glorious-future">When and how will we get to this glorious future?</h3>

<p>So when can we get to this world where agents completely autonomously write their own code and produce real, working products?</p>

<p>I think some software engineers are basically there today. GasTown seems to run fairly autonomously, with agents working on it all the time. That said, I don’t think throwing agents at compilers and browsers has produced a working product yet.</p>

<p>For me personally, I don’t know if I have a project that really is accelerated by 100 autonomous agents all writing code. But I’d like to start climbing this maturity ladder to see if I can squeeze out a little bit more of that time to think about the problems I’m really interested in diving into.</p>]]></content><author><name></name></author><category term="coding" /><summary type="html"><![CDATA[The new AI Era]]></summary></entry><entry><title type="html">DUGS: Data Uniqueness via Gradient Similarity</title><link href="https://y.tsutsumi.io/dugs" rel="alternate" type="text/html" title="DUGS: Data Uniqueness via Gradient Similarity" /><published>2026-03-08T07:00:00+00:00</published><updated>2026-03-08T07:00:00+00:00</updated><id>https://y.tsutsumi.io/data-uniqueness-via-gradient-similarity-dugs</id><content type="html" xml:base="https://y.tsutsumi.io/dugs"><![CDATA[<h1 id="data-uniqueness-via-gradient-similarity">Data Uniqueness via Gradient Similarity</h1>

<h2 id="summary">Summary</h2>

<p>This post outlines an experiment I ran to better understand the unique data inputs I’m using for model training. It is a modification of DVGS - “Data Valuation via Gradient Similarity” - that I’m calling DUGS (data uniqueness via gradient similarity).</p>

<h2 id="algorithm-high-level">Algorithm high level</h2>

<p>The high level idea is:</p>

<ol>
  <li>Grab the following:
a. An already trained model
b. A random sampling, of size X, of the data you would like to evaluate</li>
  <li>Initialize a map of <code class="language-plaintext highlighter-rouge">{data_point_id, uniqueness_dimension}</code> to store differing data points.</li>
  <li>Run a training pass on each datapoint. Calculate the gradient up to a layer Z.</li>
  <li>Reduce the dimensionality of that vector via a random matrix <code class="language-plaintext highlighter-rouge">{gradient_size, uniqueness_dimension}</code> to reduce the cost of calculating gradient similarity. (Johnson-Lindenstrauss projections)</li>
  <li>Use cosine similarity to compare it to the existing vectors in the list
    <ol>
      <li>if the similarity to every previous data point is below some threshold (0.3 by default), add the new data point to the list.</li>
    </ol>
  </li>
</ol>
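<p>Steps 2–5 can be sketched in pure Python, assuming each data point’s gradient has already been flattened into a vector; the function names here are illustrative, not from an existing library:</p>

```python
import math
import random

def project(grad, matrix):
    """Reduce a gradient vector with a fixed random matrix (JL-style projection)."""
    return [sum(g * w for g, w in zip(grad, row)) for row in matrix]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def unique_points(gradients, uniqueness_dim=8, threshold=0.3, seed=0):
    """Keep data points whose projected gradient is dissimilar to all kept so far.

    gradients: {data_point_id: flattened gradient vector}
    Returns {data_point_id: projected vector} for the unique points.
    """
    rng = random.Random(seed)
    grad_size = len(next(iter(gradients.values())))
    matrix = [[rng.gauss(0, 1) for _ in range(grad_size)]
              for _ in range(uniqueness_dim)]
    kept = {}
    for point_id, grad in gradients.items():
        vec = project(grad, matrix)
        if all(cosine(vec, v) < threshold for v in kept.values()):
            kept[point_id] = vec
    return kept
```

<p>In practice you would compute the per-example gradients with your training framework and use a real linear-algebra library for the projection; this sketch only illustrates the control flow.</p>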

<h2 id="tunable-parameters">Tunable parameters</h2>

<h3 id="data-set-size">Data Set Size</h3>

<p>The size of the data set you are running the evaluation on will affect the runtime of the algorithm linearly.</p>

<h3 id="uniqueness-dimension-via-johnson-lindestrauss-lemma-error">Uniqueness dimension via Johnson-lindestrauss lemma error</h3>

<p>The smaller the dimension size you can reduce to, the more efficient the calculation is.</p>

<p>The Johnson-Lindenstrauss Lemma states that the reduced dimension <code class="language-plaintext highlighter-rouge">k</code> needed to preserve pairwise distances within an error bound <code class="language-plaintext highlighter-rouge">e</code> for a dataset of size <code class="language-plaintext highlighter-rouge">N</code> is bounded by:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>k &gt; (8 * ln(N) / e^2)
</code></pre></div></div>

<p>Although we are only seeking a limited number of data points, making this match the target <em>full dataset</em> size ensures that there is sufficient dimensionality to capture the nuances between data points. The main tunable parameter is the margin of error introduced by reducing the matrix used for comparison.</p>

<p>For example, for 10,000 images and a desired error of 0.001, you would need a dimension of roughly <code class="language-plaintext highlighter-rouge">8 * ln(10000) / (0.001^2) = 73682722</code>, or somewhere between 2^26 and 2^27 (rounding to a power-of-two matrix size is preferred, since cache sizes and processor / GPU SM counts tend to align to multiples of two).</p>
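<p>The arithmetic above can be checked in a few lines of Python (the power-of-two rounding is a hardware convention, not part of the lemma):</p>

```python
import math

def jl_dimension(n, eps):
    """Lower bound on the reduced dimension k from the JL lemma: k > 8 * ln(N) / e^2."""
    return 8 * math.log(n) / eps ** 2

k = jl_dimension(10_000, 0.001)          # roughly 73.7 million
power_of_two = math.ceil(math.log2(k))   # 27, i.e. between 2^26 and 2^27
```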

<h3 id="number-of-layers-to-backpropagate">Number of layers to backpropagate</h3>

<p>This one is a model-specific choice: the more layers backpropagation is run through, the more the runtime is multiplied by the cost of computing each layer’s backpropagation.</p>

<p>As a rule of thumb, I think you only want to go back a relatively small number of layers: there is an intuition that the later layers of a model are the ones more correlated with fine-tuned behavior for the specific task. This is probably the behavior one is most interested in, such as when trying to find more datapoints of similar but underrepresented data, to increase their representation in the distribution.</p>

<h2 id="results">Results</h2>

<p>I tried this out in my sandbox against my toy model <a href="https://github.com/toumorokoshi/yft-ml-sandbox/tree/main/alexnet_dvgs">based on AlexNet</a>, which uses the <a href="https://github.com/fastai/imagenette">Imagenette dataset</a>. Looking at the results visually, it appears it was able to differentiate and find examples of each of the 10 categories:</p>

<p><img src="../../assets/2026-03-08-data-uniqueness-via-gradient-similarity-dugs.png" alt="2026-03-08-data-uniqueness-via-gradient-similarity-dugs.png" /></p>

<h3 id="future-experiments-to-try">Future Experiments to try</h3>

<ul>
  <li>focus on the gradient of a specific layer (e.g. the embeddings)</li>
  <li>use dimensionality reduction via Johnson-Lindenstrauss projections</li>
  <li>can we use some form of k means clustering to help group the data points?
    <ul>
      <li>this would be helpful to see how many “categories” of data there are.</li>
    </ul>
  </li>
</ul>]]></content><author><name></name></author><category term="coding" /><category term="ml" /><summary type="html"><![CDATA[Data Uniqueness via Gradient Similarity]]></summary></entry><entry><title type="html">Setting up Hibernate on Linux</title><link href="https://y.tsutsumi.io/setting-up-hibernate-linux" rel="alternate" type="text/html" title="Setting up Hibernate on Linux" /><published>2026-02-23T07:00:00+00:00</published><updated>2026-02-23T07:00:00+00:00</updated><id>https://y.tsutsumi.io/setting-up-hibernate-linux</id><content type="html" xml:base="https://y.tsutsumi.io/setting-up-hibernate-linux"><![CDATA[<h1 id="setting-up-hibernate-on-linux">Setting up Hibernate on Linux</h1>

<p>My Framework 13 laptop has an issue where the battery drains very quickly, even when sleeping.</p>

<p>This seems to be a known issue with Framework laptops. Since I often just use my work computer during the weekdays, the laptop is dead before I can use it again.</p>

<h2 id="reviewing-sleep-states-available">Reviewing sleep states available</h2>

<p>In Linux, there are specific levels of sleep states that can be used:</p>

<ul>
  <li>freeze (S0ix): suspend-to-idle; a low-power software sleep state.</li>
  <li>standby (S1): rarely used in modern systems.</li>
  <li>mem (S3): hardware is powered off except RAM.</li>
  <li>disk (S4): hibernate; system state is written to disk.</li>
</ul>

<h2 id="checking-power-states-available">Checking power states available</h2>

<p>First I checked what was available with:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cat</span> /sys/power/state
<span class="c"># freeze mem</span>
</code></pre></div></div>

<p>So freeze and mem are available.</p>

<h2 id="configuring-hibernate-with-luks">Configuring hibernate with LUKS</h2>

<p>I like to use LUKS (Linux Unified Key Setup) to encrypt my drives. To do so with hibernate, the following workflow is required:</p>

<ol>
  <li>use the bootloader to load the initramfs</li>
  <li>have the initramfs decrypt the LVM volume that contains the main partition as well as swap.</li>
  <li>the initramfs detects that the swap partition is available and restores the system state from it, if a hibernate image is found.</li>
</ol>

<h3 id="configuring-the-partitions">Configuring the partitions</h3>

<p>The final partition structure will look something like:</p>

<ul>
  <li>/dev/nvme0n1p1 (EFI / bootloader)</li>
  <li>/dev/nvme0n1p2 Linux filesystem partition: an unencrypted boot partition that contains the OS needed to perform the decryption of the encrypted drive.</li>
  <li>/dev/nvme0n1p3 Linux filesystem partition: the encrypted disk partition.
    <ul>
      <li>/dev/mapper/dm-crypt-0
        <ul>
          <li>/dev/mapper/ubuntu--vg-ubuntu--lv: the actual OS.</li>
          <li>/dev/mapper/swap: the swap space that the partition will hibernate to.</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>

<h2 id="steps">Steps</h2>

<p><em>NOTE</em>: these steps were with my install of Ubuntu 25.10. YMMV.</p>

<h3 id="resize-the-partition-and-make-a-swap-partition">Resize the partition and make a swap partition</h3>

<ol>
  <li>use a live USB so I can modify the partition</li>
  <li>open the LUKS partition: <code class="language-plaintext highlighter-rouge">sudo cryptsetup open ${partition} ${lvm-volume-name}</code></li>
  <li>reduce the size: <code class="language-plaintext highlighter-rouge">sudo lvreduce -r -L -128G /dev/mapper/${root-lvm-partition}</code></li>
  <li>create the volume: <code class="language-plaintext highlighter-rouge">sudo lvcreate -L 128G -n swap ${lvm-volume-name}</code></li>
  <li>format as swap: <code class="language-plaintext highlighter-rouge">sudo mkswap /dev/mapper/${swap-lvm-partition}</code></li>
</ol>

<h3 id="make-the-partition-swap">Make the partition swap</h3>

<p>This ensures the swap partition is available on boot to suspend to.</p>

<p>When back into the OS:</p>

<ol>
  <li><code class="language-plaintext highlighter-rouge">sudo swapon /dev/mapper/${swap-partition-name}</code></li>
  <li>Update <code class="language-plaintext highlighter-rouge">/etc/fstab</code> to mount the swap on boot:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/dev/mapper/{swap-partition-name} none swap sw 0 0
</code></pre></div></div>

<h3 id="set-the-swap-partition-as-a-hibernate-partition">set the swap partition as a hibernate partition</h3>

<p>This tells the kernel where to look for the hibernate image when resuming.</p>

<ol>
  <li>modify <code class="language-plaintext highlighter-rouge">GRUB_CMDLINE_LINUX_DEFAULT</code> in <code class="language-plaintext highlighter-rouge">/etc/default/grub</code> with:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>resume=/dev/mapper/{swap-partition-name}
</code></pre></div></div>

<ol start="2">
  <li>run <code class="language-plaintext highlighter-rouge">sudo update-grub</code></li>
</ol>

<h3 id="enable-hibernate-in-the-linux-kernel-image">enable hibernate in the linux kernel image</h3>

<p>This is required for the initramfs to know how to restore from the hibernate image.</p>

<ol>
  <li>create file <code class="language-plaintext highlighter-rouge">/etc/initramfs-tools/conf.d/resume</code> with:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>RESUME=/dev/mapper/{swap-partition-name}
</code></pre></div></div>

<ol start="2">
  <li>run <code class="language-plaintext highlighter-rouge">sudo update-initramfs -u -k all</code></li>
</ol>

<h3 id="disable-secure-boot">disable secure boot</h3>

<p>The last step is to disable secure boot in the BIOS.</p>

<p>Theoretically there is a way to sign the hibernated image with a TPM (trusted platform module) or a MOK (machine owner key), but I didn’t want to have to re-encrypt my volume.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Success! After the above, I can now hibernate my laptop.</p>

<p>Even with my 64GB of RAM, the hibernate / restore has been pretty quick so far: roughly 20 seconds at most to come back up from a full hibernate. It’s a good solution for laptops I use once in 24 hours.</p>]]></content><author><name></name></author><category term="linux" /><category term="hibernate" /><summary type="html"><![CDATA[Setting up Hibernate on Linux]]></summary></entry><entry><title type="html">My Diet in 2026</title><link href="https://y.tsutsumi.io/diet" rel="alternate" type="text/html" title="My Diet in 2026" /><published>2026-02-16T07:00:00+00:00</published><updated>2026-02-16T07:00:00+00:00</updated><id>https://y.tsutsumi.io/diet-2026</id><content type="html" xml:base="https://y.tsutsumi.io/diet"><![CDATA[<p>These are some notes about what I eat, how I eat, and why.</p>

<p>see <a href="/diet/2022">/diet/2022</a> for an older version of this article.</p>

<h2 id="short-checklist">Short checklist</h2>

<ul>
  <li>Vegan diet</li>
  <li>Drink caffeinated drinks (e.g. coffee) to suppress appetite, and to <a href="https://medicine.nus.edu.sg/news/caffeine-helps-restore-memory-function-after-sleep-loss-nus-medicine-study-shows/">help combat memory loss</a>.</li>
  <li>Sodas are a great way to suppress appetite.</li>
  <li>Target 1.6g protein / kg body weight day.</li>
  <li>Target 45g fiber / day.</li>
  <li>Try to get as close to zero for saturated fat intake.</li>
  <li>Target 1400kcal for cutting, 1800kcal for bulking.
    <ul>
      <li>when bulking, do a slow bulk to build muscle.</li>
    </ul>
  </li>
  <li>Target a low-carb diet (50g of carbs / day).</li>
  <li>Moderate sodium intake</li>
  <li>Try to hit 100% RDA on potassium (this is very, very hard): this helps counteract bloating caused by sodium.</li>
</ul>

<h2 id="example-diets">Example diets</h2>

<table>
  <thead>
    <tr>
      <th>Food</th>
      <th>Calories</th>
      <th>Protein</th>
      <th>Fiber</th>
      <th>Carbs</th>
      <th>Time</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1/2 Soy Latte (180ml) + fiber</td>
      <td>32</td>
      <td>2.5</td>
      <td>4</td>
      <td>0.5</td>
      <td>8:00am</td>
    </tr>
    <tr>
      <td>Keto Bread + 60g Protein Spread</td>
      <td>100</td>
      <td>16</td>
      <td>8</td>
      <td>3</td>
      <td>10:00am</td>
    </tr>
    <tr>
      <td>Sparkling water or diet soda</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>0</td>
      <td>11:30am</td>
    </tr>
    <tr>
      <td>Keto Bread + 50g Protein Spread</td>
      <td>100</td>
      <td>16</td>
      <td>8</td>
      <td>3</td>
      <td>12:00pm</td>
    </tr>
    <tr>
      <td>1/2 Soy Latte (180ml) + fiber</td>
      <td>32</td>
      <td>2.5</td>
      <td>4</td>
      <td>0.5</td>
      <td>1:00pm</td>
    </tr>
    <tr>
      <td>Huel Black (1/2 serving)</td>
      <td>200</td>
      <td>20</td>
      <td>4</td>
      <td>9</td>
      <td>2:00pm</td>
    </tr>
    <tr>
      <td>Protein powder</td>
      <td>120</td>
      <td>20</td>
      <td>4</td>
      <td> </td>
      <td>4:00pm</td>
    </tr>
    <tr>
      <td>Keto Bread + 50g Protein Spread</td>
      <td>100</td>
      <td>16</td>
      <td>8</td>
      <td>3</td>
      <td>5:00pm</td>
    </tr>
    <tr>
      <td>Whatever covers the remaining ~20g of protein</td>
      <td>??</td>
      <td>??</td>
      <td>?</td>
      <td> </td>
      <td>after work</td>
    </tr>
    <tr>
      <td>Total</td>
      <td>~840</td>
      <td>100g</td>
      <td>45g</td>
      <td> </td>
      <td> </td>
    </tr>
  </tbody>
</table>

<h2 id="major-ideas">Major Ideas</h2>

<h3 id="mostly-vegan-plant-based-diet">Mostly vegan (plant-based diet)</h3>

<p><a href="https://y.tsutsumi.io/2020/03/04/book-report-the-blue-zones/">People who eat a plant-based diet live 7 years longer</a>.</p>

<p>It’s hard for me to go completely vegan (I like my seafood and dairy), so I try to go for vegan meals most of the week at home. When I eat out I eat vegetarian or seafood.</p>

<p>From what I read in Blue Zones (above), eating meat once a week still helps you get a lot of the longevity benefits.</p>

<p>Those who eat plant-based high-protein diets also tend to have better kidney health than those who eat dairy- or meat-based high-protein diets.</p>

<h3 id="high-protein">High-protein</h3>

<p>I eat at least 1.6 grams of protein per kilogram of body weight a day. Studies suggest this is the amount that maximizes muscular hypertrophy. I strive to maximize muscle mass ultimately to increase my healthspan.</p>

<p>With a vegan diet, that number likely should be higher: vegan proteins are not always complete, and may not be in as easily digestible a form as meat.</p>

<p>To compensate, I strive for foods that have a <a href="https://en.wikipedia.org/wiki/Digestible_Indispensable_Amino_Acid_Score">DIAAS score</a> close to 1 or greater:</p>

<ul>
  <li>soy protein isolate, soybeans, soymilk, tofu.</li>
  <li>pea protein.</li>
  <li>beans (kidney, fava).</li>
</ul>

<h3 id="targetting-caloric-restriction">Targeting Caloric Restriction</h3>

<p>I’m currently in the middle of trying to lose weight: my last DEXA scan in 2022/10 put me at 21% body fat, much higher than I’d like. My current target is 15% body fat, which means I have to lose 12 lbs of body weight.</p>

<p>To that end, I’m looking at some minor caloric restriction: at 1700 kcal / day, and assuming I can build up a daily caloric expenditure of 2100 kcal, that puts me at 3500 / (2100 - 1700) ~ 8.75 days to lose a pound of weight. Ideally I could reach my target weight in 12 * 8.75 ~ 105 days (~3.5 months).</p>
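<p>The arithmetic above as a sketch (3500 kcal per pound of body fat is the usual rule of thumb):</p>

```python
KCAL_PER_LB_FAT = 3500  # rule-of-thumb energy content of a pound of body fat

def days_to_lose(pounds: float, intake_kcal: float, expenditure_kcal: float) -> float:
    """Days needed to lose `pounds` at a steady daily caloric deficit."""
    daily_deficit = expenditure_kcal - intake_kcal
    return pounds * KCAL_PER_LB_FAT / daily_deficit

print(days_to_lose(1, 1700, 2100))   # 8.75 days per pound
print(days_to_lose(12, 1700, 2100))  # 105.0 days for 12 lbs
```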

<p>I’m mixing that with lifting weights to try to guarantee muscle hypertrophy.</p>

<p>In reality my caloric intake is often higher due to limited self-control, but I’m working on that.</p>

<h2 id="other-tips">Other Tips</h2>

<h3 id="sucralose-may-cause-nafld">Sucralose may cause NAFLD</h3>

<p>When I started eating four servings of Huel protein powder a day (to try to up
my protein intake, those 80 grams help) along with drinking a sugar-free sweetened
coffee daily, I found that my ALT and AST numbers rose significantly
(11-&gt;16 and 17-&gt;25, respectively, over 10 months).</p>

<p>I’m not sure of the precise cause, but after doing some digging I found research
in mice showing that sucralose and stevia (and regular sugar) consumption
increase ALT and AST.</p>

<p>Correlation is not causation, but I’m currently (2023/04) trying to lower my
Sucralose consumption significantly to see if my numbers change at all.</p>

<h3 id="only-allulose-and-stevia-for-non-nutritive-sweeteners">Only Allulose and Stevia for non-nutritive sweeteners</h3>

<p>Several non-nutritive sweeteners have downsides:</p>

<ul>
  <li>Aspartame may cause an insulin response.</li>
  <li>Erythritol may cause blood clotting, and impact the blood/brain barrier.</li>
  <li>Sucralose may cause NAFLD.</li>
</ul>

<p>Which leaves a limited set of non-nutritive sweeteners to choose from:</p>

<ul>
  <li>Allulose</li>
  <li>Stevia</li>
</ul>

<h2 id="specific-food-recommendations">Specific Food Recommendations</h2>

<p>To hit a 50g carb target within 1400 kcal, you need to target foods that provide
roughly 1 gram of carbs per 28 kcal or less (1400 / 50 = 28).</p>

<p>Also, unlike most low-carb diets, I prefer to stay low in saturated fat as well.
It’s a hard balance, but I try to make sure these foods are relatively low in
saturated fat.</p>

<p>For my specific dietary preferences, my recommendations are:</p>

<table>
  <thead>
    <tr>
      <th>Food</th>
      <th>Calories</th>
      <th>Protein</th>
      <th>Fiber</th>
      <th>Carbs</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1/2 Soy Latte (180ml) + inulin fiber</td>
      <td>32</td>
      <td>2.5</td>
      <td>2</td>
      <td>0.5</td>
    </tr>
    <tr>
      <td>Huel Black (1/2 serving)</td>
      <td>200</td>
      <td>20</td>
      <td>4</td>
      <td>9</td>
    </tr>
    <tr>
      <td>Nature’s Own Keto Bread</td>
      <td>35</td>
      <td>6</td>
      <td>9</td>
      <td>1</td>
    </tr>
    <tr>
      <td>Simple Truth Protein Crackers</td>
      <td>120</td>
      <td>10</td>
      <td>6</td>
      <td>4</td>
    </tr>
    <tr>
      <td>Almonds</td>
      <td>160</td>
      <td>6</td>
      <td>3</td>
      <td>3</td>
    </tr>
    <tr>
      <td>Spinach (100g)</td>
      <td>23</td>
      <td>2.9</td>
      <td>2.2</td>
      <td>3.6</td>
    </tr>
  </tbody>
</table>
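<p>A quick script (a sketch; the numbers mirror the table above) can check each food against the 1400 / 50 = 28 kcal-per-gram-of-carbs threshold:</p>

```python
# (calories, carbs_g) per serving, mirroring the table above
foods = {
    "1/2 Soy Latte + inulin fiber": (32, 0.5),
    "Huel Black (1/2 serving)": (200, 9),
    "Nature's Own Keto Bread": (35, 1),
    "Simple Truth Protein Crackers": (120, 4),
    "Almonds": (160, 3),
    "Spinach (100g)": (23, 3.6),
}

THRESHOLD = 1400 / 50  # 28 kcal per gram of carbs

for name, (kcal, carbs) in foods.items():
    ratio = kcal / carbs
    verdict = "fits" if ratio >= THRESHOLD else "too carb-heavy"
    print(f"{name}: {ratio:.0f} kcal per gram of carbs ({verdict})")
```

<p>Interestingly, Huel and spinach fall below the threshold on their own, which is why they need to be balanced against near-zero-carb items elsewhere in the day.</p>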

<h3 id="nespresso">Nespresso</h3>

<p>I like to drink Nespresso every morning. I generally make a mezzo (half
americano, half soy latte) with one of the following:</p>

<ul>
  <li>Mezzo with Tropical Coconut Flavor + 180ml soy milk.</li>
  <li>Straight: Peppermint Pinwheel.</li>
  <li>Bianco Doppio with 120ml milk.</li>
  <li>2x Altissio with 180ml milk (I like this because they offer a decaf option).</li>
</ul>

<p>Sometimes I replace the soy with a larger volume of macadamia milk (fewer
calories).</p>

<h3 id="good-foods">Good Foods</h3>

<ul>
  <li>Pistachio: relatively lower carbs, low saturated fat, high potassium.
    <ul>
      <li>High in phosphorus, so try to eat it in moderation.</li>
    </ul>
  </li>
</ul>]]></content><author><name></name></author><category term="diet" /><category term="health" /><summary type="html"><![CDATA[These are some notes about what I eat, how I eat, and why.]]></summary></entry><entry><title type="html">farsi</title><link href="https://y.tsutsumi.io/languages/farsi" rel="alternate" type="text/html" title="farsi" /><published>2026-02-09T07:00:00+00:00</published><updated>2026-02-09T07:00:00+00:00</updated><id>https://y.tsutsumi.io/languages/farsi-grammar</id><content type="html" xml:base="https://y.tsutsumi.io/languages/farsi"><![CDATA[<h1 id="farsi-grammar">Farsi Grammar</h1>

<p>In my spare time, I’m learning Farsi. I’ll use this as a kitchen sink for my notes on the language.</p>

<h2 id="peyda-konam">Peyda Konam</h2>

<p>In farsi, there are phrases such as</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ادامه پیدا کنم
</code></pre></div></div>

<p>The peyda konam (found X) is used in a situation where a direct action is not taken to make the object achieve that state.</p>

<p>For example, you could say “if the revolution continues…” - but this does not refer to any specific object ensuring the revolution does continue.</p>

<p>This is where “peyda konam” comes in. Since it’s nebulous whether it will actually happen, the phrasing is about the subject finding itself in that situation when it does occur, rather than it actively being done.</p>

<h2 id="mitoonestan">Mitoonestan</h2>

<p>The word mitoonestan (می‌توانستم) is the past continuous/imperfect form of the verb tavānestan (توانستن), which means “to be able to.”</p>

<p>Mitoonestan is interesting, because it is a modal / auxiliary verb. Modal / auxiliary verbs require a <em>main verb</em> to clarify what is possible / what is desired / etc.</p>

<p>So applying the past tense to these cases is complicated, and the way it’s done in Farsi is to conjugate the modal verb, not the main verb.</p>

<p>e.g. “I couldn’t do it” is نمی‌تونستم بکنم.</p>

<h2 id="the-word-az-is-used-sometimes-where-its-not-in-english">The word “az” is sometimes used where it’s not in English</h2>

<p>Sometimes you might hear an Iranian use phrases like “thanks from” when speaking English. This is because in Persian, the word for from, “از”, is used as a preposition where it is not in English.</p>

<p>For example, “thanks for” is actually “thankful from” (تشکر از).</p>

<h2 id="always-use-keh-even-for-who">Always use “keh”, even for “who”</h2>

<p>Unlike English, where we use “who” for people and “what” for things, Farsi uses “keh” for both.</p>

<p>For example, “who is there?” is “کی اونجاست؟” (key unjāst?) - but “what is there?” is “چی اونجاست؟” (chi unjāst?).</p>

<h2 id="use-chand-ta-for-unknown-quantities">Use “chand ta” for unknown quantities</h2>

<p>When you’re asking about a quantity of something, and you don’t know the exact number, use “chand ta” (چند تا).</p>

<p>For example, “how many apples are there?” is “چند تا سیب اونجاست؟” (chand ta sib unjāst?).</p>

<p>This is in contrast to “chand hafte ha” (چند هفته ها), which is used when you’re stating a quantity (the plural suffix is attached to the object).</p>

<p>I’ve been thinking a lot about how to do well in system design interviews
recently. Aside from the obvious utility in getting an offer as a software
engineer, I think there is some happy union where being good at a system design
<em>interview</em> also can provide a good rubric at doing off-the-cuff system design.</p>

<p>Here are my notes on my ideal workflow, which I plan to iterate on as time allows.</p>

<ol>
  <li>gather requirements</li>
  <li>API design (maybe you could do this during “gather requirements”  as you
figure out usage).</li>
  <li>block diagram</li>
  <li>data storage</li>
</ol>

<h2 id="1-gather-requirements">1: gather requirements</h2>

<p>I think requirements have a minimum checklist: have you thought about
every intrinsic attribute that could affect your system?</p>

<p>The common ones, in my opinion, are:</p>

<ul>
  <li>usage requirements (start with this first)</li>
  <li>seasonality in usage patterns (e.g. are certain days / months heavier than others)</li>
  <li>read / write load (or read-heavy / write-heavy)</li>
  <li>latency requirements</li>
  <li>geolocation</li>
  <li>uptime requirements</li>
  <li>scale</li>
</ul>

<p>Then anything more domain-specific to the problem at hand.</p>

<h3 id="napkin-math-around-requirements">napkin math around requirements</h3>

<p>“napkin math”, where you try to do ballpark estimates of the numerical
requirements or properties of your system, is a must for system design. In
interviews I found myself attempting really complex numerical conversions that
I honestly couldn’t do in my head; a lot of that came from interchanging
base-2 and base-10 calculations.</p>

<p>To help with this, I have the following framework:</p>

<ol>
  <li>Calculate the data storage in bytes (base 2 calculation)</li>
  <li>Do estimates in multiples of ten if possible (e.g. 500k users). Write that out as powers of 10 (e.g. 5*10^5).</li>
  <li>Estimate requests per second by estimating the total activity per <em>day</em>, dividing by 10^5, then multiplying by 1.1.</li>
</ol>

<p>The third one requires a bit more explanation. Basically:</p>

<ul>
  <li>There are 86,400 seconds in a day.</li>
  <li>100,000 / 86,400 ~ 1.15. Which we can further ballpark to 1.1 or 1.2.</li>
</ul>

<p>This, combined with using multiples of ten for users, makes the math really
really easy. 1.1 is very easy to mentally perform as well.</p>
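<p>As a sketch, here is the shortcut next to the exact math (the 5*10^6 requests/day figure is just an example):</p>

```python
SECONDS_PER_DAY = 86_400

def rps_exact(requests_per_day: float) -> float:
    """Exact requests-per-second conversion."""
    return requests_per_day / SECONDS_PER_DAY

def rps_napkin(requests_per_day: float) -> float:
    """Interview shortcut: divide by 10^5, then multiply by 1.1
    (since 10^5 / 86,400 ~ 1.157)."""
    return requests_per_day / 10**5 * 1.1

daily = 5 * 10**6  # e.g. 500k users making 10 requests each per day
print(f"exact:  {rps_exact(daily):.1f} req/s")   # 57.9
print(f"napkin: {rps_napkin(daily):.1f} req/s")  # 55.0
```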

<h3 id="figure-out-data-storage">Figure out data storage</h3>

<p>Question: which two of the CAP theorem’s three guarantees are most critical for
the storage of the data?</p>

<p>Every systems design problem has some sort of data that you need to store. On
that note, I’ve seen a couple different flavors of “data storage” discussions:</p>

<ol>
  <li>Thinking about the design patterns to use based on the properties of your
business problem.</li>
  <li>Discussing real-life databases or categories that you might use (e.g. SQL vs
document database).</li>
</ol>

<p>To hedge your bets, I think you should start with 1, where you think about the
data structures, and know how to map them to the databases that implement them:</p>

<table>
  <thead>
    <tr>
      <th>design pattern or data structure</th>
      <th>database</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>hashmap</td>
      <td>key-value store</td>
    </tr>
    <tr>
      <td>b-tree index</td>
      <td>indexed database</td>
    </tr>
  </tbody>
</table>

<p>A common pattern is to:</p>

<ol>
  <li>have a dataset split into multiple “shards”</li>
  <li>replicate each shard into replica sets (each replica set could have a single master via paxos quorum)</li>
  <li>have the metadata of each replica set stored in a config server.</li>
  <li>have the config server heartbeat into replicas to do coordination (e.g. disenroll)</li>
  <li>the master has a write-ahead log to store the transactions and ensure ordering.
Add a checksum to each entry to verify the operation was fully written;
otherwise, query the master for the correct row data and re-verify.</li>
  <li>replicas replicate the WAL and perform subsequent writes.</li>
</ol>

<pre><code class="language-mermaid">graph TD
CS[Config Server] --&gt; RS1[Replica Set 1]
CS --&gt; RS2[Replica Set 2]
CS --&gt; RS3[Replica Set 3]
subgraph "Shard 1"
RS1 --&gt; M1[Master]
RS1 --&gt; S1A[Replica]
RS1 --&gt; S1B[Replica]
end
subgraph "Shard 2"
RS2 --&gt; M2[Master]
RS2 --&gt; S2A[Replica]
RS2 --&gt; S2B[Replica]
end
subgraph "Shard 3"
RS3 --&gt; M3[Master]
RS3 --&gt; S3A[Replica]
RS3 --&gt; S3B[Replica]
end
%% Heartbeat connections
CS -.-&gt;|Heartbeat| M1
CS -.-&gt;|Heartbeat| M2
CS -.-&gt;|Heartbeat| M3
CS -.-&gt;|Heartbeat| S1A
CS -.-&gt;|Heartbeat| S1B
CS -.-&gt;|Heartbeat| S2A
CS -.-&gt;|Heartbeat| S2B
CS -.-&gt;|Heartbeat| S3A
CS -.-&gt;|Heartbeat| S3B
</code></pre>
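<p>Step 5 can be sketched in a few lines of Python; the in-memory list, record format, and CRC32 checksum here are illustrative choices, not taken from any particular database:</p>

```python
import zlib

class WriteAheadLog:
    """Append-only log where each entry carries a checksum, so a torn or
    corrupt write can be detected on replay."""

    def __init__(self):
        self.entries = []  # in a real system, an fsync'd file

    def append(self, op: bytes) -> None:
        # record the checksum alongside the operation; only acknowledge
        # the write to the client once the entry is durable
        self.entries.append((zlib.crc32(op), op))

    def replay(self):
        """Yield valid operations in order, stopping at the first corrupt
        entry (which would instead be re-fetched from the master)."""
        for checksum, op in self.entries:
            if zlib.crc32(op) != checksum:
                break
            yield op

wal = WriteAheadLog()
wal.append(b"SET k1 v1")
wal.append(b"SET k2 v2")
print(list(wal.replay()))  # [b'SET k1 v1', b'SET k2 v2']
```

<p>In a real system the entries go to an fsync’d file, and the checksum catches torn writes at the tail of the log during recovery.</p>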

<h1 id="organizing-and-communicating-in-a-design-interview">Organizing and communicating in a design interview</h1>

<ul>
  <li>have three different text sections: this will help communicate to the
interviewer what assumptions you have made:
    <ul>
      <li>requirements</li>
      <li>API</li>
      <li>user journey</li>
    </ul>
  </li>
</ul>

<h1 id="deeper-notes">Deeper notes</h1>

<h2 id="design-patterns-per-use-case">Design patterns per use case</h2>

<table>
  <thead>
    <tr>
      <th>problem</th>
      <th>solution</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>thundering herd / high number of requests</td>
      <td>exponential backoff</td>
    </tr>
    <tr>
      <td>generating garbage / corruption over time</td>
      <td>garbage collection process</td>
    </tr>
    <tr>
      <td>ensuring nodes are available</td>
      <td>heartbeating</td>
    </tr>
    <tr>
      <td>high availability with state</td>
      <td>master with a quorum (e.g. via paxos)</td>
    </tr>
    <tr>
      <td>distributed consensus</td>
      <td>paxos</td>
    </tr>
  </tbody>
</table>

<h2 id="design-patterns-when-considering-data-storage">Design Patterns when considering data storage</h2>

<h3 id="choosing-the-partitioning-key-for-data-locality">Choosing the partitioning key for data locality.</h3>

<p>Similar to Google Bigtable, one could choose the partition key for rows such
that most queries only have to send requests to a subset of nodes, rather than
all of them. This:</p>

<ul>
  <li>reduces the query load on all nodes.</li>
  <li>also reduces the latency of the request, since fewer nodes have to respond.</li>
</ul>
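<p>As a sketch of the routing idea (hash-based here for brevity; Bigtable itself partitions by lexicographic row ranges, but the locality principle is the same), with a hypothetical <code>shard_for</code> helper:</p>

```python
import hashlib

def shard_for(partition_key: str, num_shards: int = 3) -> int:
    """Route a row to a shard by its partition key, so queries filtering
    on that key touch one shard instead of all of them."""
    digest = hashlib.sha256(partition_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# all rows sharing a partition key land on the same shard
print(shard_for("user:42") == shard_for("user:42"))  # True
```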

<h3 id="paxos">Paxos</h3>

<p>The Paxos algorithm can be used to achieve consensus across multiple separate
hosts.</p>

<p>The Google Chubby lock server can handle ~20 req / s of content updates, with
a significant number of reads and KeepAlives.</p>

<h3 id="write-ahead-log">write-ahead log</h3>

<p>A write-ahead log can be used to ensure durability in the operations that are
performed. A transaction can be acknowledged after it’s committed into a
write-ahead log.</p>

<h3 id="expontial-backoff">Exponential backoff</h3>
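<p>The core idea: on each failed attempt, wait roughly base * 2^attempt with random jitter, so a thundering herd of clients doesn’t retry in lockstep. A minimal sketch:</p>

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry `operation`, sleeping up to base_delay * 2^attempt between
    attempts ("full jitter"), so failing clients spread out over time."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(random.uniform(0, base_delay * 2**attempt))
```

<p>With a 0.1s base delay, the sleep caps grow 0.1, 0.2, 0.4, 0.8 seconds across attempts.</p>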

<h3 id="read-locks-and-write-locks">Read locks and write locks</h3>

<p>Read locks (aka shared locks) are a common database pattern that allows
multiple readers on a row of data while preventing mutation: subsequent
writes are blocked until all readers have released the lock.</p>
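<p>A minimal sketch of the shared/exclusive semantics using a condition variable (simplified: a steady stream of readers can starve a writer):</p>

```python
import threading

class SharedLock:
    """Many concurrent readers; a writer waits for all readers to leave,
    then excludes both readers and other writers."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:  # blocks while a writer holds the lock
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # wake any waiting writer

    def acquire_write(self):
        self._cond.acquire()
        while self._readers > 0:
            # wait() releases the lock, so late readers may still slip in
            self._cond.wait()
        # holding the condition's lock now excludes readers and writers

    def release_write(self):
        self._cond.release()

lock = SharedLock()
lock.acquire_read()  # any number of readers may hold this concurrently
lock.release_read()
```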

<p>Write locks are an exclusive lock on the row, blocking both reads and writes.</p>]]></content><author><name></name></author><category term="coding" /><summary type="html"><![CDATA[Distributed System Design Interview]]></summary></entry><entry><title type="html">VS Code Debug Json Cheatsheet</title><link href="https://y.tsutsumi.io/vscode-debug-cheatsheet" rel="alternate" type="text/html" title="VS Code Debug Json Cheatsheet" /><published>2026-02-02T07:00:00+00:00</published><updated>2026-02-02T07:00:00+00:00</updated><id>https://y.tsutsumi.io/vscode-launch-debug</id><content type="html" xml:base="https://y.tsutsumi.io/vscode-debug-cheatsheet"><![CDATA[<p>With all of the new vscode-based editors out there (Cursor, Antigravity, and of course VS Code itself), I find myself having to reconfigure editors all the time.</p>

<p>Here are some snippets of vs code debug configuration to easily copy-paste as needed.</p>

<h2 id="go">Go</h2>

<h3 id="running-a-command-line-tool">Running a command line tool</h3>

<p>I’ve used this when trying to debug <a href="https://github.com/aep-dev/aepcli">aepcli</a>:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="w">       </span><span class="p">{</span><span class="w">
          </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug Command Line App"</span><span class="p">,</span><span class="w">
            </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"go"</span><span class="p">,</span><span class="w">
            </span><span class="nl">"request"</span><span class="p">:</span><span class="w"> </span><span class="s2">"launch"</span><span class="p">,</span><span class="w">
            </span><span class="nl">"mode"</span><span class="p">:</span><span class="w"> </span><span class="s2">"auto"</span><span class="p">,</span><span class="w">
            </span><span class="err">//</span><span class="w"> </span><span class="err">or</span><span class="w"> </span><span class="err">path</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">command</span><span class="w"> </span><span class="err">line</span><span class="w">
            </span><span class="nl">"program"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${workspaceFolder}/cmd/aepcli"</span><span class="p">,</span><span class="w">
           </span><span class="nl">"args"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
               </span><span class="s2">"http://localhost:8000/openapi.json"</span><span class="w">
            </span><span class="p">]</span><span class="w">
        </span><span class="p">}</span><span class="w">
</span></code></pre></div></div>]]></content><author><name></name></author><summary type="html"><![CDATA[With all of the new vscode-based editors out there (Cursor, Antigravity, and of course VS Code itself), I find myself having to reconfigure editors all the time.]]></summary></entry><entry><title type="html">realfood.gov</title><link href="https://y.tsutsumi.io/realfood-gov" rel="alternate" type="text/html" title="realfood.gov" /><published>2026-01-19T07:00:00+00:00</published><updated>2026-01-19T07:00:00+00:00</updated><id>https://y.tsutsumi.io/realfood-gov</id><content type="html" xml:base="https://y.tsutsumi.io/realfood-gov"><![CDATA[<p><a href="https://realfood.gov/">realfood.gov</a> looks to overhaul decades-long dietary guidelines for Americans.</p>

<p>Although, like many food pyramids before it, there is a very strong lobbyist influence on this guidance (The <a href="https://www.nytimes.com/2026/01/07/well/rfk-jr-food-pyramid-nutrition-guidelines-protein.html">New York Times covers it well</a>), my thoughts are primarily on the guidance itself, as a health-conscious person who has done some reading on nutrition from folks like Dr. Peter Attia.</p>

<h2 id="summary">Summary</h2>

<p>My thoughts on the guidance is overall positive. But the changes I would make are:</p>

<ul>
  <li>Limit saturated fat as much as possible, similar to added sugars. Eating more saturated fat directly correlates with higher ApoB and cardiovascular disease risk.</li>
  <li>Eliminate the conversation around whole foods vs processed foods. Acknowledge that most processed foods are nutritionally unhealthy, but focus on the nutritional content and not the processed nature.</li>
  <li>Continue to focus on a plant-based diet. This is primarily because the amount of water and resources required to produce plant-based foods is significantly lower than the resources required for meat or most dairy. But meat is generally high in saturated fat so it’s best to avoid that anyway.</li>
  <li>Positive changes: increased protein and fiber recommendations, reduction of high-carb foods in the pyramid.</li>
</ul>

<h2 id="the-details">The details</h2>

<h3 id="pros">Pros</h3>

<h4 id="increased-protein-intake">Increased protein intake</h4>

<p>Generally, higher protein intake enables the body to build more muscle, although resistance training is much more important. This is especially critical in the later years where it’s incredibly hard to build muscles and strength.</p>

<h4 id="reduced-bread--simple-carb-intake">Reduced bread / simple carb intake</h4>

<p>It’s surprising that the original recommendations in the food pyramid were so heavy on carbs. Although ensuring you have enough energy throughout the day is a good thing, a carb-heavy diet can result in a lot of insulin to digest the glucose that is produced as a result, and can contribute to diabetes.</p>

<p>There’s definitely healthier carbs in whole grains, but it is better to balance more protein or fats as well.</p>

<h4 id="eliminating-bias-around-fats">Eliminating bias around fats</h4>

<p>Unsaturated fats help improve satiety, and also reduce the number of carbs required to reach the daily caloric recommendation. I still think low-fat foods have their place, but as long as the fat is unsaturated, it is generally less causal to high ldl / apob, and is a good macronutrient to include in the mix.</p>

<h3 id="mixed">Mixed</h3>

<h4 id="well-intentioned-but-incorrect-focus-against-processed-foods">Well-intentioned, but incorrect, focus against processed foods.</h4>

<p>The document focuses heavily on the concept of “real” foods, advocating against highly processed food.</p>

<p>If you’re not willing to think about the nutritional content of food a bit more deeply, then I think it’s generally good guidance: most highly processed foods have extremely poor nutritional content, with high saturated fat, high carbs, and low protein.</p>

<p>However, this pairing ends up being a confounder: there just are not very many healthy but highly processed foods, and so most studies on highly processed food end up showing results very similar to studies on foods high in saturated fat and carbs and low in protein.</p>

<p>Although it’s not a study, an anecdotal counterexample is <a href="https://huel.com/">huel</a>: definitely highly processed if you look at the ingredient list (dozens of ingredients, micronutrient blends), but many who run labs on the huel subreddit after multiple months exclusively on huel report fat loss, maintained muscle, and generally better labs such as blood pressure (<a href="https://www.reddit.com/r/Huel/comments/168xtf5/laboratory_results_4_months_on_100_huel_powder/">1</a>, <a href="https://discuss.huel.com/t/12-months-huel-only-diet-results/15750">2</a>).</p>

<p>Another anecdotal example is myself. My diet is primarily processed foods: keto bread, protein powders, artificial sweeteners, huel, and sometimes slimfast (although when I eat “real food” it’s generally roasted whole vegetables).</p>

<p>However, my labs for the past 2-3 years on this diet have only improved over time:</p>

<ul>
  <li>triglycerides are 50.</li>
  <li>ldlc is 80 or lower.</li>
  <li>body fat is the lowest it’s ever been.</li>
  <li>blood pressure is consistently below 120/80: ten years ago I was generally right on the borderline.</li>
</ul>

<p>Which indicates to me the concerns are largely about the <em>overall nutritional content</em>, not about the processed nature of the food.</p>

<p>Again, it’s generally good guidance, but it points at the wrong cause.</p>

<h3 id="cons">Cons</h3>

<h4 id="disregarding-saturated-fat">Disregarding saturated fat</h4>

<p>The guidance effectively eliminates any mention of saturated fat. Multiple studies at this point have established the correlation between high saturated fat intake and ApoB (the lipoprotein that primarily gets caught in the subendothelial space of the arterial wall, contributing to ASCVD), so ignoring it is, in my opinion, mistaken.</p>

<p>This is probably the guidance that I would be the most concerned about. It’s probably also not a coincidence that, since many meats and dairy tend to be much higher in saturated fat content than plant-based options, ignoring saturated fat tends to benefit the meat industry the most.</p>

<p>The only guidance is the existing one which suggests limiting saturated fat intake to 10% of calories or less a day: in a 2000 kcal diet, that’s 200 calories, or roughly 22 grams. It’s less reinforced in the new narrative.</p>

<p>I’d replace this guidance with limiting saturated fat to near-zero as much as possible, similar to added sugars.</p>

<h4 id="eating-more-meat-over-a-plant-based-diet">Eating more meat over a plant-based diet</h4>

<p>This particular con is less about nutritional content: farming meat requires significantly more water and land than growing plants. I don’t think it’s tenable to keep increasing per-capita resource usage for food, especially as the population keeps growing.</p>

<h2 id="conclusions">Conclusions</h2>

<p>I left my summary at the beginning, but my parting thought: it’s unfortunate that science, like anything human, can be so strongly biased. And I see parts of this new guidance that have significant cons.</p>

<p>But that said, the old guidance was not perfect either. I hope the next version (no doubt swiftly replaced by the next administration run by a Democrat) corrects some of the issues I’ve raised above, without regressing on some of the improvements.</p>]]></content><author><name></name></author><category term="reflection" /><summary type="html"><![CDATA[realfood.gov looks to overhaul decades-long dietary guidelines for Americans.]]></summary></entry><entry><title type="html">Review on the 2025 Framework 13</title><link href="https://y.tsutsumi.io/framework-13-2025" rel="alternate" type="text/html" title="Review on the 2025 Framework 13" /><published>2026-01-10T07:00:00+00:00</published><updated>2026-01-10T07:00:00+00:00</updated><id>https://y.tsutsumi.io/framework-laptop</id><content type="html" xml:base="https://y.tsutsumi.io/framework-13-2025"><![CDATA[<p><img src="../../assets/2026-01-10-framework-laptop.png" alt="The framework laptop 13" /></p>

<p>I finally got a <a href="https://frame.work/">Framework</a> 13 laptop! This is something I’ve wanted for several years, ever since they started shipping their laptops in 2021.</p>

<h1 id="a-quick-intro-to-framework">A quick intro to framework</h1>

<p>Framework focuses on repairable, but also modular, devices. They have produced laptops in the 12, 13, and 16 inch form factors, as well as a desktop. Some of the features specific to Framework devices include:</p>

<p>“swappable I/O”: effectively there are slots of USB-C ports that have a square notch in them, paired with adapters to various common ports like USB-A, DisplayPort, or HDMI. The result is the user-facing ports are effectively changeable, to some extent on the fly: it’s hard to change the port in, say, a second, but if you had 20-30 seconds I think you could change them out.</p>

<p>Repairable / interchangeable parts: there is a whole Framework store with replacement parts for everything from batteries to webcams to keyboards. You can change things like the bezel color around your monitor, or for the desktop they have square chicklets that you can replace with your own flair.</p>

<h1 id="what-motivated-me">What motivated me</h1>

<p>So why the wait? Well, at the time I had a laptop from 2018 that worked perfectly well. It still does (the battery has only degraded to maybe 80% at most).</p>

<p>So in 2025, there’s a couple things that have changed for me:</p>

<ul>
  <li>The availability of RISC-V and ARM boards especially.</li>
  <li>The AI HX 370 processor, released this year.</li>
</ul>

<h1 id="impressions-and-thoughts">Impressions and thoughts</h1>

<p>Overall, I’m very impressed and happy with my 13. I bought the “DIY” version, which requires self-installation of the RAM and storage, but I preferred that, as it quickly gave me a more visceral sense of the device.</p>

<p>It’s also my first laptop with the 3:2 display, which runs taller than the ratios typically seen in laptops these days. I do feel like the extra vertical real estate is noticeable, and helps a bit with coding or text-based tasks.</p>

<p>I’m also happy with the performance: the device feels extremely snappy with Ubuntu 25.10 and the standard GNOME display manager. Videos and tabs load quickly. Aside from games, there is no perceivable latency as far as processing is concerned.</p>

<p>Of course, my frames of reference are an M2 MacBook Pro from 2022, and an HP Spectre X360 from 2018, but compared to both of these my laptop feels much smoother.</p>

<p>Battery life also seems to be holding up quite well. I charge pretty aggressively regardless, but even for heavy workloads like gaming at 100% GPU utilization I’m looking at 1-2 hours. For lighter work I have not yet had to think about battery at all, though I’ll probably do some more empirical benchmarks soon. <code class="language-plaintext highlighter-rouge">powertop</code> reports a draw of about 7 watts right now, so I’m guessing I can get about 7 or 8 hours of light work.</p>

<p>The build of the laptop is also very nice: the aluminum feels very similar to a MacBook’s, but the device feels much lighter than the M2 Pro. This feels like a “premium” laptop to me.</p>

<h2 id="availability-for-risc-v-and-arm-boards">Availability for RISC-V and ARM boards</h2>

<p>The amazing thing about Framework devices is not just repairability; it’s the modularity as well.</p>

<p>The 13 form factor in particular has been so successful that other vendors have started producing compatible boards:</p>

<ul>
  <li><a href="https://frame.work/blog/risc-v-mainboard-for-framework-laptop-13-is-now-available">DeepComputing’s RISC-V board</a></li>
  <li><a href="https://metacomputing.io/products/metacomputing-aipc">MetaComputing’s ARM-based board</a></li>
</ul>

<p>Improving support for alternative processors seems like an amazing hobby project, especially for ARM given the recent announcement of <a href="https://fex-emu.com/">FEX</a>-based emulation of x86 to play Steam games on ARM devices.</p>

<h2 id="the-ai-9-hx-370">The AI 9 HX 370</h2>

<p>I had been angling for the Framework 16 in particular because of its higher TDP, which would let me use the maximum potential of whatever hardware I put in the device. It would also let me upgrade the GPU as more resource-intensive games are released.</p>

<p>However, the AI 9 HX 370 has a pretty amazing iGPU, the Radeon 890M. In my benchmarks I’m getting framerates good <em>enough</em> to accept on the off chance I’m gaming on the go. Note that these are with single-channel memory, so the GPU can likely perform much, much better:</p>

<ul>
  <li>30-40 FPS with Cyberpunk 2077.</li>
  <li>30 FPS with Baldur’s Gate 3.</li>
  <li>30 FPS with Expedition 33 (with some really low resolution settings).</li>
</ul>

<p>Some lighter games like Dispatch run at 60 FPS, just as well as on my desktop.</p>

<h3 id="for-ai">For AI</h3>

<p>I’ve recently started dabbling in ML model architecture and training. My <a href="https://github.com/toumorokoshi/yft-ml-sandbox/tree/main/alexnet">reproduction of the AlexNet paper</a> on the Imagenette dataset trains 60 epochs in about 3-4 hours. That isn’t nearly as fast as the 30 minutes on the <a href="https://docs.cloud.google.com/compute/docs/gpus?hl=ja#l4-gpus">L4 GPUs available through Google Cloud</a> that I normally test against, but it’s still a decent showing, especially at a 30W TDP limit.</p>

<h2 id="mistake-buying-single-channel-memory">Mistake: buying single-channel memory</h2>

<p>Honestly, I wasn’t thinking straight on this one. I bought one 32GB SO-DIMM of RAM, figuring I’d buy a second when RAM prices come down (the current AI trend has caused RAM prices to more than triple). But the single SO-DIMM really hurts memory-bound operations, in particular AI training/inference and gaming: two things I’m doing a fair bit of right now. For gaming, I’ve read benchmarks showing that dual DIMMs can increase the framerate of some games by about 10-20%.</p>

<p>Even when I buy a second DIMM, I expect to end up with more RAM anyway, probably maxing out at 128GB if possible, past the 96GB theoretically supported by the AI 9 HX 370.</p>
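<p>For what it’s worth, my understanding of asymmetric (“flex”) dual-channel mode is that the matched capacity across the two DIMMs interleaves and only the remainder runs single-channel; a quick sketch with hypothetical 32GB + 16GB sizes:</p>

```python
# Sketch of asymmetric ("flex") dual-channel behavior as I understand it:
# the matched capacity on each channel interleaves (dual-channel), and the
# remainder on the larger DIMM runs single-channel. Sizes are hypothetical.
def flex_mode_split(dimm_a_gb: int, dimm_b_gb: int) -> tuple[int, int]:
    dual_gb = 2 * min(dimm_a_gb, dimm_b_gb)   # interleaved region
    single_gb = abs(dimm_a_gb - dimm_b_gb)    # leftover on the larger DIMM
    return dual_gb, single_gb

dual_gb, single_gb = flex_mode_split(32, 16)
print(f"{dual_gb}GB dual-channel, {single_gb}GB single-channel")
```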

<p>I’m considering buying a second, smaller DIMM so that the matched portion of memory could run dual-channel, but then I would be relying on how well the processor load-balances across uneven RAM sizes.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[]]></summary></entry></feed>