Goodbye Pelican, Hello WordPress!

First of all, sorry to all of those who came here through Google and were redirected to the homepage. I tried my best to preserve URLs, but I couldn’t figure out a great way to do that.

For returning readers: you may have noticed the site has changed. That’s because this blog is now powered by WordPress!

I’m generally not a fan of heavy-handed systems, but the user experience eventually convinced me this was the right route. I’m now using WordPress, and even the paid plan.

Why I chose WordPress

WordPress as a platform provides a lot of tools to simplify the blog authoring experience. With Pelican, my blog writing experience was the following:

  1. Create a new file in reStructuredText and add some boilerplate.
  2. Add images by copying each one into the images/ directory, then adding the image link by hand into the file.
  3. Re-render the post over and over again.
  4. Call the execute script, which handles publishing the files to GitHub.

The disadvantages of the platform were:

  1. Iteration was slow, including the ability to quickly add and manipulate images.
  2. The experience was desktop-only, and Git-based to boot, so I had to have enough time to clone (or pull and push) a git repository and fire up a text editor. Not great for just jotting down a quick note.

WordPress reduces this whole process, and supports both mobile and desktop:

  1. Create a new post in the UI.
  2. Add images by just selecting the file. I can do basic modifications like crop and rotate directly in WordPress.
  3. Click “publish”.

Overall, the reduced friction has let me write posts more frequently, as well as use it as a place to keep notes in the meantime.

There are also other benefits:

  • Several themes are available, so I can style the site quickly.
  • A mobile app.
  • SEO friendliness.

And probably more features I have yet to discover.

So, welcome to my new WordPress blog!

Crafting pelican-export in 6 hours.

Over the past two or three days, I spent some deep work time writing pelican-export, a tool to export posts from the Pelican static blog creator to WordPress (with some easy hooks to add more targets). Overall I was happy with the project, not only because it was successful, but because I was able to get to something complete in a pretty short period of time: 6 hours. Reflecting, I owe this to the techniques I’ve learned for prototyping quickly.

Here’s a timeline of how I iterated, with some analysis.

[20m] Finding Prior Art

Before I start any project, I try to at least do a few quick web searches to see if what I want already exists. Searching for “pelican to wordpress” pulled up this blog post:

https://code.zoia.org/2016/11/29/migrating-from-pelican-to-wordpress/

Which pointed at a git repo:

https://github.com/robertozoia/pelican-to-wordpress

Fantastic! Something exists that I can use. Even if it doesn’t work off the bat, I can probably fix it, use it, and be on my way.

[60m] Trying to use pelican-to-wordpress

I started by cloning the repo and looking through the code. From here I got some great ideas for quickly building this integration (e.g. discovering the python-wordpress-xmlrpc library). Unfortunately the code only supported Markdown (my posts are in reStructuredText), and there were a few things I wasn’t a fan of (constants, including the password, in a file), so I decided to start doing some light refactoring.

I started organizing things into a package structure, and tried to use the Pelican Python package itself to do things like read the file contents (saving me the need to parse the text myself). While looking for those docs, I stumbled upon some issues in the pelican repository suggesting that, for exporting, one would want to write a plugin:

https://github.com/getpelican/pelican/issues/2143

At this point, I decided to explore plugins.

[60m] Scaffolding and plugin structure.

Looking through the plugin docs, writing a plugin seemed much easier than trying to read in the pelican posts myself: I had limited success with instantiating a pelican reader object directly, as it expects specific configuration variables.

So I started authoring a real package. Copying in package scaffolding like setup.py from another repo, I added the minimum integration I needed to actually install the plugin into pelican and run it.
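The entry point for a pelican plugin is small. As a sketch (export_posts stands in for my real handler, and the pdb call is the trick described in the next section):

    from pelican import signals

    def export_posts(pelican):
        # drop into the debugger to inspect the data structures
        # that pelican hands us.
        import pdb; pdb.set_trace()

    def register():
        # pelican discovers a plugin by calling its register() function.
        signals.finalized.connect(export_posts)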

[60m] Rapid iteration with pdb.

At that point, I added a pdb statement to the integration so I could quickly look at the data structures. Using that, I crafted the code to migrate post formats in a few minutes:

    # within the exporter class; imports shown here for context.
    from datetime import datetime
    from typing import Optional

    from wordpress_xmlrpc import WordPressPost

    def process_post(self, content) -> Optional[WordPressPost]:
        """Create a wordpress post based on pelican content"""
        if content.status == "draft":
            return None
        post = WordPressPost()
        post.title = content.title
        post.slug = content.slug
        post.content = content.content
        # this conversion is required, as pelican uses a SafeDateTime
        # that python-wordpress-xmlrpc doesn't recognize as a valid date.
        post.date = datetime.fromisoformat(content.date.isoformat())
        post.term_names = {
            "category": [content.category.name],
        }
        if hasattr(content, "tags"):
            post.term_names["post_tag"] = [tag.name for tag in content.tags]
        return post

I added a similar pdb statement to the “finalized” pelican signal, and tested the client with hard-coded values. I was done as far as functionality was concerned!
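That hard-coded test looked roughly like the following sketch, where the URL and credentials are placeholders and posts is a list of the WordPressPost objects built by process_post above:

    from wordpress_xmlrpc import Client
    from wordpress_xmlrpc.methods.posts import NewPost

    # placeholder endpoint and credentials
    client = Client("https://example.com/xmlrpc.php", "user", "password")

    for post in posts:
        # NewPost creates the post and returns its new id
        client.call(NewPost(post))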

[180m] Code cleanup and publishing

The bulk of my time after that was just smaller cleanup that I wanted to do from a code hygiene standpoint. Things like:

  • [70m] making the WordPress integration an interface, so it’s easy to hook in other exporters.
  • [40m] adding a configuration pattern to enable hooking in other exporters.
  • [10m] renaming the repo to its final name of pelican-export.
  • [30m] adding a readme and documentation.
  • [30m] publishing the package to PyPI.

This was half of my time! It’s interesting how much of a project goes into just ensuring the right structure and practices for the long term.

Takeaways

I took every shortcut in my book to arrive at something functional, as quickly as I could. Techniques that saved me tons of time were:

  • Looking for prior art. Brainstorming how to do the work myself would have meant investigating potential avenues and evaluating how long each would take. Having an existing example, even if it didn’t work for me, helped me ramp up on the problem quickly.
  • Throwing code away. I had a significant amount of modified code in my forked exporter, but continuing down that route would have meant a significant investment in hacking on and understanding the pelican library. Seeing that the plugin route existed, and testing it out, saved me several hours of trying to hack an interface to private pelican APIs.
  • Using pdb to write code live. In Python especially, there’s no replacement for just introspecting and trying things. Authoring just enough code to integrate as a plugin gave me a fast feedback loop, and throwing in a pdb statement to quickly learn the data structures helped me find the ideal structure in about 10 minutes.

There was also a fair bit of Python expertise that I used to drive down the coding time, but what’s interesting is that the biggest contributors to the time savings were process: knowing the tricks for taking the right approach to the code, and iterating quickly, helped me get this done in effectively a single work day.

Tech Notes: Debugging LLVM + Rust

I’m working on a programming language, writing the compiler in Rust. I’m stuck at this point on a segfault that occurs with the following IR (generated by my compiler):

; ModuleID = 'main'
source_filename = "main"

define void @main() {
entry:
  %result = call i64 @fib(i64 1)
}

define i64 @fib(i64) {
entry:
  %alloca = alloca i64
  store i64 %0, i64* %alloca
  %load = load i64, i64* %alloca
  switch i64 %load, label %switchcomplete [
    i64 0, label %case
    i64 1, label %case1
  ]

switchcomplete:                                   ; preds = %case1, %entry, %case
  %load2 = load i64, i64* %alloca
  %binop = sub i64 %load2, 1
  %result = call i64 @fib(i64 %binop)
  %load3 = load i64, i64* %alloca
  %binop4 = sub i64 %load3, 2
  %result5 = call i64 @fib(i64 %binop4)
  %binop6 = add i64 %result, %result5
  ret i64 %binop6

case:                                             ; preds = %entry
  ret i64 0
  br label %switchcomplete

case1:                                            ; preds = %entry
  ret i64 1
  br label %switchcomplete
}

This segfaults whenever I run my compiler, which currently compiles the code and immediately executes it in LLVM’s MCJIT.

Mystery SIGSEGV

Whenever I run my code under the debugger, the segfault doesn’t occur at the same point as when I run my app on the command line.

VS Code’s debugger returns a stack trace (screenshot omitted) pointing into the FPPassManager, so something is happening there. Apparently the FPPassManager is what handles generating code for functions (per a read of the source code).

getNumSuccessors was a bit nebulous to me… what does this function actually do? I wasn’t familiar with the term “successor”; it must be something specific to LLVM. Some Googling finds: http://llvm.org/docs/ProgrammersManual.html#iterating-over-predecessors-successors-of-blocks

So a successor is a block that can execute immediately after the current one. getNumSuccessors in LLVM’s Core.h is documented as a call made on a terminator. So what precisely is a terminator?

Looking through the LLVM source code again, a terminator is the classification for instructions that end a BasicBlock. The list from LLVM 9 looks like:

  /* Terminator Instructions */
  LLVMRet            = 1,
  LLVMBr             = 2,
  LLVMSwitch         = 3,
  LLVMIndirectBr     = 4,
  LLVMInvoke         = 5,
  /* removed 6 due to API changes */

Looking at the traceback, this is specifically occurring in updatePostDominatedByUnreachable. The source code for that is:

/// Add \p BB to PostDominatedByUnreachable set if applicable.
void
BranchProbabilityInfo::updatePostDominatedByUnreachable(const BasicBlock *BB) {
  const Instruction *TI = BB->getTerminator();
  if (TI->getNumSuccessors() == 0) {
    if (isa<UnreachableInst>(TI) ||
        // If this block is terminated by a call to
        // @llvm.experimental.deoptimize then treat it like an unreachable since
        // the @llvm.experimental.deoptimize call is expected to practically
        // never execute.
        BB->getTerminatingDeoptimizeCall())
      PostDominatedByUnreachable.insert(BB);
    return;
  }

The actual error occurs on the first instruction of the function’s assembly:

; id = {0x00012806}, range = [0x000000000093fbb0-0x000000000093fc3b), name="llvm::TerminatorInst::getNumSuccessors() const", mangled="_ZNK4llvm14TerminatorInst16getNumSuccessorsEv"
; Source location: unknown
555555E93BB0: 0F B6 47 10                movzbl 0x10(%rdi), %eax
555555E93BB4: 48 8D 15 81 3B D5 01       leaq   0x1d53b81(%rip), %rdx
555555E93BBB: 83 E8 18                   subl   $0x

I can’t read assembler very well, but since this is a method, the first instruction most likely loads a field of the current object (%rdi holds the this pointer). If so, getNumSuccessors is receiving a pointer to something it doesn’t expect: most likely a null pointer dereference.

My hunch now is I have a basic block without a terminator statement, causing the JIT pass to fail.

There was a missing return statement on the main function. Adding that didn’t change anything.

Fixing the blocks to only have terminators did indeed fix the issue! Ultimately, figuring out that a validator existed, and heeding its error messages, led to the solution.
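For reference, the validator in question is LLVM’s module verifier, which llvm-sys exposes. A minimal sketch of running it before handing the module to MCJIT (module is a raw LLVMModuleRef; the function name is mine):

use std::ffi::CStr;
use std::os::raw::c_char;
use std::ptr;

use llvm_sys::analysis::{LLVMVerifierFailureAction, LLVMVerifyModule};
use llvm_sys::core::LLVMDisposeMessage;
use llvm_sys::prelude::LLVMModuleRef;

/// Run LLVM's verifier, returning its message when the IR is malformed.
unsafe fn verify_module(module: LLVMModuleRef) -> Result<(), String> {
    let mut error: *mut c_char = ptr::null_mut();
    // LLVMVerifyModule returns nonzero when verification fails.
    let failed = LLVMVerifyModule(
        module,
        LLVMVerifierFailureAction::LLVMReturnStatusAction,
        &mut error,
    ) != 0;
    if failed {
        let message = CStr::from_ptr(error).to_string_lossy().into_owned();
        LLVMDisposeMessage(error);
        return Err(message);
    }
    Ok(())
}

For IR like the above, the verifier reports the stray terminators and the missing return, a far friendlier failure than a segfault deep inside a JIT pass.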

https://github.com/toumorokoshi/disp/commit/1591788b8fc1871f1211c8ae6114e4d9a3fdf397

Tech Notes: Updating Unity for Cerebrawl

I’m interested in starting a habit of note taking while I take on some pretty difficult tasks, maybe as a learning experience for myself or others if they find it valuable.

Today, I’ll be tackling Updating Cerebrawl’s Unity from 5.6 to 2018.3.

This is actually pretty late in the journey: I’ve got a branch of 2018.3 working, I just need to figure out how to reconcile that with the month-and-a-half’s worth of changes that were made in the meantime.

My upgrade path thus far has been a combination of the following tools:

  • vscode, when I need to go look at live code
  • SourceTree, when I need to do some fine-grained change picking
  • Unity, to see if the thing runs

Errors Again

Pulling up my branch again, there are errors around the lack of a TMPro namespace. It seems that TextMeshProUGUI doesn’t exist for TextMeshPro 1.3. Something to look into later, but for now commenting that out should be fine.

Next, I ran into a duplicate tk2dSkin.dll. It looks like that now goes in the “tk2d” directory rather than “TK2DROOT”, so I just deleted the old one.

Cherry-Picking the New Changes

We had to revert the 2018 Unity changes previously. Last time I tried to merge in the master branch (I use git-svn, so it’s effectively the SVN tree), and I think git got confused because I had reverted a bunch of the changes I had made, breaking everything and requiring me to apply those changes again.

This time, I should only pull in the changes made after that point. I created another branch to keep my working changes from being broken and lost in history when I merge in other changes.

I can use git cherry-pick to specifically pick up diffs in that version range:

git cherry-pick b813563..5646829

I ran into multiple errors cherry-picking. The resolution was to take the incoming changes again and again (these are Unity asset files, not ones I needed to touch for the update).
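Each round looked something like this (the path is illustrative; during a cherry-pick, “theirs” refers to the incoming change):

git checkout --theirs -- Assets/SomeAsset.unity
git add Assets/SomeAsset.unity
git cherry-pick --continue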

Once those were done, I switched back to the Unity editor and let it load again.

It Works!

Huzzah! For the most part everything has migrated over. The biggest challenge on this one was upgrading tk2d toolkit, which was broken by newer Unity versions.

Merging Changes In

I hit another snag trying to merge files in: git-svn attempted to rebase my changes on top of the existing branch, which doesn’t work very well, as it tries to merge the diffs again.

My best hope is to construct a single changeset that contains all of the changes I made, applied on top of what’s in SVN today. To do so I run:

git svn fetch                          # pull down the latest SVN revisions
git checkout master
git reset --hard git-svn               # make master match SVN exactly
git clean -xdf                         # drop untracked and generated files
git checkout feature/merge-unity-2018
git reset --soft master                # keep my tree, staged as one diff against master
git commit

Finally, a git svn dcommit, and all the changes have been made!

From Emacs to Atom

I want to start this post by stating I have nothing but respect, admiration, and love for the Emacs community. Emacs’ extensibility, community packages, and its choice to effectively be an editor built on a Lisp VM are amazing, and anyone choosing Emacs as an editor is investing in something that can grow with their needs.

Nevertheless, there are compelling reasons to switch to Atom. I am saying goodbye to Emacs, and have started using Atom as my main text editor.

My History with Emacs

I learned about Emacs during my college years (roughly 2008), when I happened to attend a house party for a family friend. The friend was a software developer who had retired many years before, but upon learning of my interest in software, he began to regale me with story after story of how much he does in Emacs.

“I check my e-mail with Emacs.”

“I built a program that opens my garage door from Emacs.”

“I share the same editor across multiple computers using the remote Emacs client.”

In all honesty I wasn’t really impressed by the idea of a program that you use to literally do everything, but it seemed like a great kernel for a text editor. I was using vim at the time, and what I always lamented (which I know others would call vim’s greatest strength) was the fact that it could only be used to modify text. When I write code, I do so much more than write the code itself. I wanted an environment that made executing additional tasks seamless:

  • Interact with version control (git push / pull, commit, add)
  • Code search
  • Running command line scripts
  • Running a REPL and unit tests

Thus I dove into Emacs. The built-in terminal emulator, and the ability to build whole programs in a single .el file and load them up to add significant functionality to the editor, meant a lot of my needs were met quickly and without significant effort. This became even easier with the release of Emacs 24, which included a built-in library to retrieve and install third-party packages, reducing the need to copy and paste code around.

I continued pretty happily for several years. I made a couple videos showcasing my Emacs setup, published my dotfiles, and wrote some tutorials as well.

Enter Atom

In 2015, Atom was released: an editor inspired by the flexibility of Emacs, but built on HTML5 technologies. Using web technologies to build massive native applications is not an advantage in every respect, but there are strong wins in some important areas.

A Powerful UI Framework

HTML, CSS, and JavaScript have all grown to support the massive range of uses that websites are now put to, and the result of tackling such a large breadth is a powerful, general system for laying out windows and styling them appropriately. Combine that with highly optimized runtimes to render said windows (web browsers), and you have a system that is not just developer-friendly, but also user-friendly.

Large Pool of Experienced Developers

A large portion of software engineers are web developers, and thus work in web technologies. The ability to transfer even some of that expertise when extending one’s editor removes a large chunk of the learning curve.

Impressed, But Not Sold

Atom was conceptually the editor I always wanted: the power of Emacs, a flexible UI framework, and a core built on technologies that I knew well and could contribute to. If I had been starting from scratch, I might have chosen Atom, even when it had just come out of beta.

However, I wasn’t starting from scratch. There were years of expertise invested in elisp, finding the right packages, learning to use them, and familiarizing myself with keybindings and the Emacs way of doing things. It didn’t make sense to throw all that out the window for a nascent editor.

The Catalyst: Atom IDE and Language Server Integration

Since Atom came out, another text editor has entered the scene: VSCode. Similar in design to Atom, VSCode took a more opinionated approach to how an editor should be organized and what tools to use (in the vein of Visual Studio). The more open world of Atom wasn’t a first priority (for example, VSCode did not provide support for more than three text windows at a time until recently).

However, VSCode did directly lead to the creation of the Language Server Protocol, which enables any text editor to take advantage of IDE-like features, as long as it builds an interface to a JSON-RPC based API.

When Atom implemented its language client, it was impressive, and it made me want to try Atom. But making the switch would require me to find equivalents for all of my existing tools, and most likely learn a new set of keybindings; I already had all of that in Emacs. However, there was a final factor that really made me switch.

Community Critical Mass

For almost any tool or program, you’ll find another that is better in almost every significant way, yet has not taken off. As much as we’d like to believe software engineering is a purely merit-based field, the reality is it depends on socio-economic factors as much as every other discipline. Market share and mind share matter.

The most impressive part of the language server protocol is not that it was built; it’s who built it. Facebook was a major contributor, teaming up with GitHub to build a real IDE experience for Atom.

Facebook’s business practices aside, they have a giant and talented engineering base. With Facebook engineers supporting a plugin like Atom IDE, there’s a strong chance that the integration will be improved and supported for years to come. And Atom is a blessed project from GitHub itself.

I love Emacs, but it’s primarily supported by volunteers who have other full-time jobs. It’s very difficult to get such a group of developers to implement something like language server support, then maintain it and contribute back for years to come.

And the active community around Atom is larger. As of October 2018, the package counts on the major package repositories per editor told a clear story (table omitted).

Unfortunately, Emacs does not have the development community in the way Atom and VSCode do. That’s a conversation worth diving into, but it doesn’t change the state of the world today.

Migrating to Atom

So, I migrated my Emacs setup to Atom. Since I was a relatively late adopter, a majority of my desired features were already part of the editor or available as an extension.

I don’t think it’s valuable to dive into exactly what my setup looks like, but if you’d like to learn more, you can check out an Atom plugin I’m working on:

https://atom.io/packages/chimera

I am now using Atom 100% of the time, and I haven’t opened Emacs in about a year. The migration process took a couple of weeks.

The Future

Today, I have a lot invested in Atom, and I like my experience. Language server integration was the missing piece, and that ecosystem (along with Atom’s integrations with it) is getting better every day.

The biggest loss I faced with Atom was performance: due to its reliance on a browser-based renderer, performance suffers versus draw calls in a native GUI. There are also improvements that could be made to Atom to ensure more UI actions are non-blocking.

The Atom team has been working on xray, a text editor designed for performance whose improvements will be incorporated into the editor.

VSCode has also done a lot better on the performance front than Atom (though both are still orders of magnitude slower than native editors). I tried it out recently and found the performance gain imperceptible for me, so it’s probably not worth losing my extension and keybinding knowledge over.


The Why of Disp Pt. 1: The Syntax

Over the past few weeks, I’ve spent some intensive time on Disp, a programming language that looks syntactically like Lisp, with the goal of making large codebases easier to manage.

There are a lot of ideas that went into its design, so I want to lay them out in a series here. I’m looking for feedback, so don’t hesitate to reply if you disagree or have ideas. Also, please check out the RFCs and leave some thoughts.

This first post in the series is about the choice of lisp + indentation. Specifically, Disp syntax looks something like the snippets below (no highlighting, unfortunately; Disp is its own fun syntax).

It’s very lisp-like: you’ll see the standard parens, which represent a function invocation. But you’ll also see that some parens are missing, with indentation in their place.

There are two extra rules to manage these syntactical changes:

  • every newline is considered an implicit expression
  • indentation means that you are providing a list to the previous, less indented statement.

This means that, to help reduce the parentheses, every statement on a new line is considered to be an expression, and a list argument can be represented with an indented block, so the two forms below are identical to the parser.
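As a reconstruction based on the two rules above (print and the literal numbers are just placeholders), the indented form:

print
    1
    2

is read by the parser exactly as the fully parenthesized form:

(print (1 2))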

There were a couple reasons for this choice:

Readability

A major complaint with lisp is the number of parentheses required, making it hard to see where parentheses begin and end. Lisp experts say that you get used to it and it’s not a major issue in the long run; there are also plugins for many editors that color-match the parentheses, which helps as well.

However, considering the language will be responsible for the parsing anyway, it seemed intuitive to remove unneeded symbols as long as the purpose stays clear to a reader. The easiest removal was the surrounding parentheses on a newline statement: at that point, the syntax looks very similar to a non-lisp programming language.

The indentation rules enable an almost Python-like syntax: in many cases a block is represented by a list of statements or expressions, so allowing one to enumerate them by indentation results in a similar level of readability, as far as expressions go. See the sketch below.
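A hypothetical conditional (if, lt, and print are placeholder names, not necessarily Disp’s real builtins), where the indented expressions form the list passed to if:

if (lt x 10)
    print "small"
    print "done"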

Semantic Indentation for Consistency

Many languages are moving to the single-idiomatic-formatting paradigm, which does a great job of quelling style discussions that ultimately have minor benefits for the reader but consume a large amount of time. I think this is a must for a language aimed at large organizations, and Disp continues in that vein.

Adding semantic meaning to indentation is almost a self-fulfilling prophecy: by adding semantic meaning, style becomes more consistent, and it’s possible to add semantic meaning to indentation because style is consistent. It would be a waste not to use indentation semantically.

Indentation is also used to improve readability and denote blocks of code in most styleguides, so it’s not a far stretch to use it as such.

Tabs instead of Spaces for Indentation

I’m sure this choice is a bit more controversial, but Disp uses tabs for indentation instead of spaces (unfortunately, many code snippets here use spaces, because it’s hard to type tabs in a browser).

There are many different flavors of indentation to choose from. Python, for example, is extremely lenient and allows one to use a mix of both. In the name of consistency and simplified parsing, it made sense to choose a single one.

Tabs were chosen to allow developers to modify the tab-width settings in their IDE, choosing the spacing that is most legible to them.

Conclusion

Thanks for reading! I’m looking for any help to improve the readability or remove unneeded syntax. There’s more in this series coming up, so stay tuned.


Using Rust functions in LLVM’s JIT

LLVM is an amazing framework for building high-performance programming languages, and Rust has some great bindings with llvm-sys. One challenge was getting functions authored in Rust exposed to LLVM. To make this happen, there are a few steps to walk through.

1. Exposing the Rust functions as C externs

When LLVM interfaces with shared libraries, it uses the C ABI to do so. Rust provides a way to do this out of the box, using the extern "C" declaration:

// #[no_mangle] keeps the symbol name unmangled, so it can be looked up as "foo".
#[no_mangle]
pub extern "C" fn foo() {
    println!("foo");
}

This instructs the Rust compiler that the function should be exposed in a way where it can be found and called like a C library function. This holds even when compiling an executable binary rather than a library.

The big gotcha here is ensuring that you declare the function as public, AND that it is re-exported as public from the main module too. If the function is located in a child module, you will need to re-export it in the main file:

// src/my_mod.rs

#[no_mangle]
pub extern "C" fn foo() {
    println!("I'm a shared library call");
}

// main.rs
mod my_mod;
// note the pub here.
pub use self::my_mod::foo;
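With the symbol exposed, the JIT’d IR can then reference it with a plain declaration, and MCJIT will resolve it from the host process. A sketch, assuming the symbol foo is visible in the process image (e.g. via #[no_mangle] as above):

declare void @foo()

define void @main() {
entry:
  call void @foo()
  ret void
}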