commit unsaved work before starting work on astro migration

2026-02-26 15:03:44 -05:00
parent bef34007d4
commit c81531a092
4 changed files with 387 additions and 0 deletions


@@ -0,0 +1,18 @@
---
title: 'Advent of Languages 2024, Day 5: Roc'
date: 2025-09-30
draft: true
toc: false
---
<script>import Sidenote from '$lib/Sidenote.svelte';</script>
I know I said at the beginning that I was most likely not going to get through much of Advent of Code before running out of steam and giving up on the whole different-language-every-day thing, but man, even I wasn't expecting to only make it through 4 days before giving up. That's a new low, even for me.
I think the problem is that I decided, in my hubris, that I was going to do Lisp for Day 5--and not a nice, friendly modern Lisp like Clojure, either, but straight-up old-fashioned Common Lisp.<Sidenote>Which isn't even the oldest of Lisps, since it only dates back to 1994. Plenty old enough for me, though.</Sidenote> For some strange reason, as soon as I reached this decision, I discovered that I had lost all interest in the project and was now actively seeking excuses to avoid it. Funny coincidence, huh?
I'm not entirely sure why I'm so subliminally resistant to the idea of using Lisp. I don't think it's _just_ because there's a set of people who like to talk about Lisp as if it's superior to every other language that has ever existed or _could_ ever possibly exist, because every language has some of those. It's more than that. It's the way that they [manage to convey](https://paulgraham.com/avg.html), without ever being so gauche as to actually come out and _say_ it, that using Lisp means that they're a better programmer than you, and smarter than you, and just all-around _better_ than you in every meaningful way. Oh, you disagree? Well, you must just not be smart enough to truly appreciate Lisp. But don't worry, it doesn't make you a _bad person_. Just, you know, not as good as _me_.
Anyway, as you'll note from the title, I eventually decided to give up on Lisp for the time being. By then, of course, it was too late to continue with Advent of Code, so I left the project to languish for most of the subsequent year. But you know what? I've got a hankering to try a new programming language again, and one I've had my eye on for a while is [Roc](https://roc-lang.org).
Roc is a statically-typed, purely<Sidenote>I think.</Sidenote> functional language. It's headed up by the guy who created Elm, which I've heard mostly good things about, and it looks sort of like a more-approachable version of Haskell, with an emphasis on performance--like, actual runtime performance. It has a heavy focus on developer tooling as well, so that's kind of cool. Also, it has built-in string interpolation, of the `${expression}` flavor, like Javascript. Except that _unlike_


@@ -0,0 +1,51 @@
---
title: 'Languages: Fast and Slow'
date: 2026-01-01
draft: true
---
WORK IN PROGRESS
<script>import Sidenote from '$lib/Sidenote.svelte';</script>
The other day I came across [this post](https://nesbitt.io/2025/12/26/how-uv-got-so-fast.html), and it kind of got my feathers ruffled a little bit.
Not the post itself, really. Honestly, it's a great post. You should go read it, because it's clear, well-constructed, and makes a valid point that's backed up with plenty of examples. But it's at least tangentially related to a recurring argument that I see hashed out again and again on places like [HackerNews](https://news.ycombinator.com) and [Lobste.rs](https://lobste.rs), and it's an argument I find really frustrating.
See, everybody knows that some languages are faster than others. You don't have to quantify it down to seventeen decimal places to know, for instance, that Rust is fast and python is slow. But inevitably, in any sufficiently large discussion along these lines, someone will show up and say snidely that "languages aren't fast or slow, algorithms are". As if, were you to _truly_ understand computer science at the deepest level, having transcended the pedestrian concept of a programming language, you would understand that there is only The Algorithm, beside which all else is but chaff and dross.
I don't know for sure where this comes from, but my sense is that this is often a principle instilled by people's university CS educations. And from an educational perspective, it makes a lot of sense: Languages come and go (oh _my_ do they come and go), but the principles of computing are eternal and unchanging.<Sidenote>Well, not exactly eternal - modern architectures have absolutely rendered things like linked lists obsolete, after all. But these changes at least tend to happen over much longer periods than hot-new-language fads.</Sidenote> If I were developing a university CS course, I'd probably do the same.
Moreover, academic CS leans heavily toward a more algorithmic perspective on _everything_. Which also makes sense because algorithms are kind of _what they do_. Pretty much every major capital-A Algorithm was invented by a CS researcher somewhere, from sorting to cryptography to distributed consensus. So _of course_ university CS courses are going to prioritize algorithms.
And I'm sure that if you were to ask a Real Credentialed Computer Scientist, the sort of person who teaches these classes from which dewy-eyed CS graduates emerge shouting that algorithm choice is the only thing that matters, they would admit that the language you choose _can_, indeed, have an effect on your application's performance. But somewhere along the way, the nuance of that message tends to go missing, and you end up with facile HackerNews comments implying that it makes no difference whether you write your app in Zig or PHP.
Standing counter to this perspective, whose proponents I will henceforth term the Algorists, are a host of honest, hard-working, salt-of-the-earth everyman coders,<Sidenote>I'm extrapolating a little bit here.</Sidenote> who insist that no, actually language choice _does_ matter, and _you can feel it_. Some applications just _feel_ fast, and others feel sluggish. And when you take a look at what languages were used to write them, you often (though not always!) discover that the fast ones were written in fast languages, and the slow ones in slow languages.
But I feel like this algorithm-first education gives a lot of formally-educated coder types a skewed view of how much language choice really _does_ matter. _In particular_, I think language choice has a _huge_ impact on the sort of "everyday" performance in all the parts of your code that _aren't_ particularly computationally intensive, and thus tend to be less amenable to algorithmic improvements. These kinds of performance problems are particularly insidious because they're more-or-less tied to the overall complexity of a codebase, which is usually low at the start (when it's easy to make big sweeping changes like rewriting in a different language), and increases later on (by which point it's much more difficult).
Now, if you read the preceding few paragraphs, you may notice a preponderance of qualifiers like "I think" and "I feel like" and "my impression is", which might understandably lead you to think that this is all based on gut feelings and personal opinion and doesn't have any hard evidence to back it up. Well, unfortunately, that's more or less true. I _wish_ I had some hard evidence to back it up, but my thesis is that language choice most of all impacts how an app "feels", in a very squishy and intangible way, and it's hard to quantify. But I take some comfort in the fact that even though it's a controversial take to say that languages can be "fast" or "slow", it's not exactly a _fringe_ opinion. Look through those discussions on HackerNews and Lobsters where people are complaining that no one should ever pay attention to anything other than algorithmic improvements, and you'll find plenty of opinions to the contrary. So at least I'm not _alone_, even if I am making arguments based on feels and vibes.
I do, however, think there's _some_ evidence for my position here. Aside from the fact that "rewrite it in a faster language" continues to be a frequently-adopted solution to performance problems, there _are_ specific cases where you can affix a number to the difference between one language and another,<Sidenote>And yes, there are always things like [The Benchmarks Game](https://madnight.github.io/benchmarksgame/), but I have to agree with the language-choice-doesn't-matter people a _little_ bit on this one: those benchmarks are often not representative of that language's performance in the real world. Obviously that's why it's called the Benchmarks _Game_ rather than the Benchmark's International Standard of Measurement Project, but people do occasionally point at these benchmarks as "proof" of a language's superiority, which I think is a bit silly.</Sidenote> and [this is one](https://josephg.com/blog/crdts-go-brrr/). Now, if you haven't read that post before you should go do that right now, because it's a) far more interesting than this one, b) written by someone with a greater degree of expertise in low-level computer performance than I will ever likely have, and c) just better written in general. Really, go read it. I'll wait.
Done? Great. Now, you'll notice that a lot of that post _is_ devoted to algorithmic improvements, and the "rewrite it in Rust" part only comes in at the end and is responsible for much less of the overall result than the algorithm stuff. By his numbers, algorithmic improvements account for a speedup of approximately 300x, while the switch to Rust comes in at only about 20x.
It might sound like I'm undermining my own position by bringing this up, but there's a critical factor that I want to highlight here: The rewrite-it-in-Rust improvements come in _after_ the algorithmic stuff. Why does that matter, you may ask? Because of [Amdahl's Law](https://en.wikipedia.org/wiki/Amdahl%27s_law), that's why!
If you're not familiar, Amdahl's law basically says that you can only get so much out of optimizing _part_ of a system. To be specific, you can never speed up a system by optimizing one part any more than you could by _removing that part entirely_, i.e. reducing its execution time down to 0.
So for instance, say you're NPM, and you've decided to split `npm install` into two phases. In Phase 1, you fetch all the indices and package relationships and such and compute the dependency graph of the project. In Phase 2, you ~~compress all the matter in the universe into a single folder on disk~~ download all the code for the resolved set of dependencies and store it on disk so it can be depended-on.<Sidenote>This is just a theoretical example. I'm sure that Real NPM (and yarn, and pnpm, and deno, and bun) all pipeline these steps so that the downloading-packages part can get a head start while the resolving-dependencies part is still chugging along.</Sidenote> Say that Phase 1 takes 40% of the total time, and Phase 2 takes 60%. How much time can you knock off the whole process by improving just the dependency-resolution phase? Amdahl's law says, 40%. This is pretty obvious; if the steps are completely separated then no matter how quick your dependency resolution is your downloading phase still takes just as long.
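To make the arithmetic concrete, here's a tiny Python sketch of Amdahl's law applied to that hypothetical npm split. The 40/60 split is just the made-up numbers from the example above:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work gets s times faster."""
    return 1 / ((1 - p) + p / s)

p = 0.40  # dependency resolution: 40% of total time in our hypothetical

# Even an infinitely fast resolver can't beat 1 / 0.6 ~= 1.67x overall,
# i.e. at most 40% knocked off the wall-clock time.
for s in [2, 10, 1_000_000]:
    print(f"{s:>9}x faster resolution -> {amdahl_speedup(p, s):.2f}x overall")
```

Note how quickly the returns diminish: a 2x speedup of Phase 1 gets you 1.25x overall, and a millionfold speedup still only gets you about 1.67x.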
Okay, but here's the thing: Algorithmic improvements are _sharply_ limited by Amdahl's Law. See, for there to be an algorithmic improvement that you can make, there has to _be an algorithm_ that you can improve. I don't think it's necessary to get into the weeds of what constitutes an algorithm to make the argument that if your problem is "My app isn't performing well," and your proposed solution is "I need to make algorithmic improvements," you need to go _find some algorithms to improve_.
And yes, every codebase is full of algorithms at every turn. Dynamic languages use a lot of hash tables, for instance (most of the time, every property access on a structure/object is a hash table lookup), and there's plenty of algorithm stuff going on there. But when you write a blog post describing how you sped up such-and-such process by 100x with algorithmic improvements, you aren't usually talking about going through and finding a bunch of unrelated linear scans over bags of properties and replacing them with hash table lookups. Usually you're talking about something like "we found out that we were accidentally iterating over the whole list of users every time we added to it," i.e. [Shlemiel the Painter's Algorithm](https://www.joelonsoftware.com/2001/12/11/back-to-basics/), so we fixed that.
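For illustration, here's a minimal Python sketch of that kind of accidental quadratic scan and its fix. The `add_user_*` functions are made up for this example, not from any real codebase:

```python
# Shlemiel the Painter: re-scanning the whole user list on every insert
# turns n additions into O(n^2) total work.
def add_user_slow(users, new_user):
    for u in users:  # accidental full linear scan on every single add
        if u == new_user:
            return
    users.append(new_user)

# The fix: keep a set alongside the list, so the membership check is an
# O(1) hash lookup instead of a scan. n additions are now O(n) total.
def add_user_fast(users, seen, new_user):
    if new_user not in seen:
        seen.add(new_user)
        users.append(new_user)
```

Both versions produce the same deduplicated list; the only difference is how the time grows as the list does.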
Right, so, where am I going with all this? It's my opinion that, if your problem is "performance" broadly, and you limit yourself _exclusively_ to looking for algorithmic improvements, **you will eventually run out of algorithms to improve**. Depending on your particular problem(s), this may or may not be perfectly acceptable. There are plenty of situations where your big problem really _is_ algorithmic in nature. But most of the time, when this is the case, it tends to be something that crops up _at a specific point_ - Cargo's switch to the [sparse index protocol](https://blog.rust-lang.org/inside-rust/2023/01/30/cargo-sparse-protocol/) is a good example of this.<Sidenote>I do distinctly remember sitting there waiting for Cargo to download its index and wondering "why does this have to be so slow, again?"</Sidenote> In Cargo's case it was particularly bad because it occurred right smack in the middle of a codepath you would hit basically every day you used Cargo. So it was a perfect target for algorithmic improvements.
_On the other hand_, many applications (dare I even say, _most_ applications?) suffer from "death by a thousand cuts" performance issues, where the problem can't really be narrowed down to one specific operation, algorithm, or point in the codebase.<Sidenote>I suspect, though I cannot prove, that React is _particularly_ bad about creating these sorts of death-by-a-thousand-cuts situations.</Sidenote> But guess what? If your performance problem is *gestures expansively at everything*, then the only way to fix it is to _improve everything_. And how do you do that? Rewrite it in a faster language, of course!
Obviously, just like algorithmic improvements, this has limits too. If your app is primarily limited by network latency, for instance, no amount of rewriting in Rust is going to help you. If you're spending all of your time waiting for an LLM to dump a load of ~~garbage~~ tokens in your lap, you have my deepest condolences. But that's not always the case! There are _plenty_ of situations where using a faster language for your application _really does make a difference_, and that's not likely to change.
In other words, the "Languages" section of the Github sidebar will continue to be one of the first things I look at when evaluating a tool, and I refuse to be ashamed of that fact.