Erik McClure

A Rant On Terra


Metaprogramming, or the ability to inspect, modify and generate code at compile-time (as opposed to reflection, which is runtime introspection of code), has slowly been gaining momentum. Programmers are finally admitting that, after accidentally inventing Turing-complete template systems, maybe we should just have proper first-class support for generating code. Rust has macros, Zig has built-in compile-time expressions, Nim lets you rewrite the AST however you please, and dependent types have been cropping up all over the place. However, with great power comes not so much great responsibility as undecidable type systems, whose undefined behavior may involve summoning eldritch abominations from the Black Abyss of Rěgne Ūt.

One place where metaprogramming is particularly useful is low-level, high-performance code, which is what Terra was created for. The idea behind Terra is that, instead of crafting ancient runes inscribed with infinitely nested variadic templates, you just replace the whole thing with an actual Turing-complete language, like, say, Lua (technically including LuaJIT extensions for FFI). This all sounds nice, and no longer requires a circle of salt to ward off demonic syntax, which Terra is quick to point out. They espouse the magical wonders of replacing your metaprogramming system with an actual scripting language:

In Terra, we just gave in to the trend of making the meta-language of C/C++ more powerful and replaced it with a real programming language, Lua.

The combination of a low-level language meta-programmed by a high-level scripting language allows many behaviors that are not possible in other systems. Unlike C/C++, Terra code can be JIT-compiled and run interleaved with Lua evaluation, making it easy to write software libraries that depend on runtime code generation.

Features of other languages such as conditional compilation and templating simply fall out of the combination of using Lua to meta-program Terra

Terra even claims you can implement Java-like OOP inheritance models as libraries and drop them into your program. It may also cure cancer (the instructions were unclear).

As shown in the templating example, Terra allows you to define methods on struct types but does not provide any built-in mechanism for inheritance or polymorphism. Instead, normal class systems can be written as libraries. More information is available in our PLDI Paper.

The file lib/javalike.t has one possible implementation of a Java-like class system, while the file lib/golike.t is more similar to Google’s Go language.

I am here to warn you, traveler, that Terra sits on a throne of lies. I was foolish. I was taken in by their audacious claims and fake jewels. It is only when I finally sat down to dine with them that I realized I was surrounded by nothing but cheap plastic and slightly burnt toast.

The Bracket Syntax Problem

Terra exists as a syntax extension to Lua. This means it adds additional keywords on top of Lua's existing grammar. Most languages, when extending a syntax, would go to great lengths to ensure the new grammar does not create any ambiguities or otherwise interfere with the original syntax, treating it like a delicate flower that mustn't be disturbed, lest it lose a single petal.

Terra takes the flower, gently places it on the ground, and then stomps on it, repeatedly, until the flower is nothing but a pile of rubbish, as dead as the dirt it grew from. Then it sets the remains of the flower on fire, collects the ashes that once knew beauty, drives to a nearby cliffside, and throws them into the uncaring ocean. It probably took a piss too, but I can't prove that.

To understand why, one must understand what the escape operator is. It allows you to splice an abstract AST generated from a Lua expression directly into Terra code. Here is an example from Terra's website:

function get5()
  return 5
end
terra foobar()
  return [ get5() + 1 ]
end
foobar:printpretty()
> output:
> foobar0 = terra() : {int32}
> 	return 6
> end
But, wait, that means it's… the same as the array indexing operator? You don't mean you just put it inside like–

local rest = {symbol(int),symbol(int)}

terra doit(first : int, [rest])
  return first + [rest[1]] + [rest[2]]
end

What.

WHAT?!

You were supposed to banish the syntax demons, not join them! This abomination is an insult to the Nine Kingdoms of Asgard! It is the very foundation that Satan himself would use to unleash Evil upon the world. Behold, mortals, for I come as the harbinger of despair:

function idx(x) return `x end
function gen(a, b) return `array(a, b) end

terra test()
  -- Intended to evaluate to array(1, 2)[0]
  return [gen(1, 2)][idx(0)]
end

For those of you joining us (probably because you heard a blood-curdling scream from down the hall), this syntax is exactly as ambiguous as you might think. Is it two splice statements put next to each other, or is it a splice statement with an array index? You no longer know if a splice operator is supposed to index the array or act as a splice operator, as mentioned in this issue. Terra “resolves this” by just assuming that any two bracketed expressions put next to each other are always an array indexing operation, which is a lot like fixing your server overheating issue by running the fire suppression system all day. However, because this is Lua, whose syntax is very much like a delicate flower that cannot be disturbed, a much worse ambiguity comes up when we try to fix this.

function idx(x) return `x end
function gen(a, b) return `array(a, b) end

terra test()
  -- This is required to make it evaluate to array(1,2)[0]
  -- return [gen(1, 2)][ [idx(0)] ]
  -- This doesn't work:
  return [gen(1, 2)][[idx(0)]]
  -- This is equivalent to:
  -- return [gen(1, 2)] "idx(0)"
end

We want to use a spliced Lua expression as the array index, but if we don't use any spaces, it turns into a string because [[string]] is the Lua syntax for an unescaped string! Now, those of you who still possess functioning brains may believe that this would always result in a syntax error, as we have now placed a string next to a variable. Not so! Lua, in its infinite wisdom, converts anything of the form symbol"string" or symbol[[string]] into a function call with the string as the only parameter. That means that, in certain circumstances, we literally attempt to call our variable as a function with our expression as a string:
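
For anyone unfamiliar with this corner of Lua, the call sugar in question is standard Lua behavior and has nothing to do with Terra:

-- Plain Lua: a string literal or [[long string]] placed directly after a
-- value is sugar for calling that value with the literal as its only argument.
print "hello"      -- same as print("hello")
print [[hello]]    -- also the same as print("hello")
-- So foo[[bar]] parses as the call foo("bar"), never as foo indexed by bar.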

local lookups = {x = 0, y = 1, z = 2, w = 3 };
  vec.metamethods.__entrymissing = macro(function(entryname, expr)
    if lookups[entryname] then
      -- This doesn't work
      return `expr.v[[lookups[entryname]]]
      -- This is equivalent to
      -- return `expr.v "lookups[entryname]"
      -- But it doesn't result in a syntax error, because it's equivalent to:
      -- return `expr.v("lookups[entryname]")
    else
      error "That is not a valid field."
    end
  end)

As a result, you get a type error, not a syntax error, and a very bizarre one too, because it's going to complain that v isn't a function. This is like trying to bake pancakes for breakfast and accidentally going scuba diving instead. It's not a sequence of events that should ever be related in any universe that obeys causality.
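
For completeness, the workaround is the same one shown earlier: put spaces around the inner escape so Lua never lexes a [[ token. This is the same macro as above with only the spaces added, assuming the same vec struct and lookups table:

local lookups = {x = 0, y = 1, z = 2, w = 3 };
vec.metamethods.__entrymissing = macro(function(entryname, expr)
  if lookups[entryname] then
    -- The spaces force Lua to parse this as an indexing expression containing
    -- an escape, instead of a long-string function call.
    return `expr.v[ [lookups[entryname]] ]
  else
    error "That is not a valid field."
  end
end)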

It should be noted that, after a friend of mine heard my screams of agony, an issue was raised to change the syntax to a summoning ritual that involves less self-mutilation. Unfortunately, this is a breaking change, and will probably require an exorcism.

The Documentation Is Wrong

Terra's documentation is so wrong that it somehow manages to be wrong in both directions. That is, some of the documentation is out-of-date, while some of it refers to concepts that never made it into master. I can only assume that a time-traveling gremlin was hired to write the documentation, who promptly got lost amidst the diverging timelines. It is a quantum document, both right and wrong at the same time, yet somehow always useless, a puzzle beyond the grasp of modern physics.

  • The first thing talked about in the API Reference is a List object. It does not actually exist. A primitive incarnation of it does exist, but it only implements map() and insertall(). Almost the entire section is completely wrong for the 1.0.0-beta1 release. The actual List object being described sits alone and forgotten in the develop branch, dust already beginning to collect on its API calls, despite those API calls being the ones in the documentation… somehow.
  • :printpretty() is a function that prints out a pretty string representation of a given piece of Terra code, by parsing the AST representation. On its face, it does do exactly what is advertised: it prints a string. However, one might assume that it returns the string, or otherwise allows you to do something with it. This doesn't happen. It literally calls the print() function, throwing the string out the window and straight into the stdout buffer without a care in the world. If you want the actual string, you must call either layoutstring() (for types) or prettystring() (for quotes). Neither function is documented, anywhere.
  • Macros can only be called from inside Terra code. Unless you give the constructor two parameters, where the second parameter is a function called from inside a Lua context. This behavior is not mentioned in any documentation, anywhere, which makes it even more confusing when someone defines a macro as macro(myfunction, myfunction) and then calls it from a Lua context, which, according to the documentation, should be impossible.
  • Struct fields are not specified by their name, but rather just held in a numbered list of {name, type} pairs. This is documented, but a consequence of this system is not: Struct field names do not have to be unique. They can all be the same thing. Terra doesn't actually care. You can't actually be sure that any given field name lookup will result in, y'know, one field. Nothing mentions this.
  • The documentation for saveobj is a special kind of infuriating, because everything is technically correct, yet it does not give you any examples and instead simply lists a function with 2 arguments and 4 interwoven optional arguments. In reality it's absolutely trivial to use because you can ignore almost all the parameters. Just write terralib.saveobj("blah", {main = main}) and you're done (see the sketch after this list). But there isn't a single example of this anywhere on the entire website. Only a paragraph and two sentences explaining in the briefest way possible how to use the function, followed by a highly technical example of how to initialize a custom target parameter, which doesn't actually compile because it has errant semicolons. This is literally the most important function in the entire language, because it's what actually compiles an executable!
  • The defer keyword is critical to being able to do proper error cleanup, because it functions similarly to Go's defer, performing a function call at the end of a lexical scope. It is not documented, anywhere, or even mentioned at all on the website. How Terra manages to implement new functionality it forgets to document while, at the same time, documenting functionality that doesn't exist yet is a 4-dimensional puzzle fit for an extra-dimensional hyperintelligent race of aliens particularly fond of BDSM.
  • You'd think that compiling Terra on Linux would be a lot simpler, but you'd be wrong. Not only are the makefiles unreliable, but cmake itself doesn't seem to work with LLVM 7 unless you pass in a very specific set of flags, none of which are documented, because compiling via cmake isn't documented at all, and this is the only way to compile with LLVM 7 or above on the latest Ubuntu release!
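
Since saveobj is the most important function in the language, here is roughly what that minimal usage looks like in context. This is only a sketch, assembled from the call quoted above plus terralib.includec, Terra's usual way of importing C headers; the function body itself is illustrative:

-- Import the C standard library so main() has something visible to do.
local C = terralib.includec("stdio.h")

terra main()
  C.printf("hello, world\n")
  return 0
end

-- The two required arguments are all you need for the common case; every
-- optional parameter can simply be omitted.
terralib.saveobj("blah", { main = main })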

Perhaps there are more tragedies hidden inside this baleful document, but I cannot know, as I have yet to unearth the true depths of the madness lurking within. I am, at most, on the third or fourth circle of hell.

Terra Doesn't Actually Work On Windows

Saying that Terra supports Windows is a statement fraught with danger. It is a statement so full of holes that an entire screen door could try to sell you car insurance and it'd still be a safer bet than running Terra on Windows. Attempting to use Terra on Windows will work if you have Visual Studio 2015 installed. It might work if you have Visual Studio 2013 installed. No other scenarios are supported, especially not ones that involve being productive. Actually compiling Terra on Windows is a hellish endeavor comparable to climbing Mount Everest in a bathing suit, which requires either having Visual Studio 2015 installed to the default location, or manually modifying a Makefile with the exact absolute paths of all the relevant dependencies. At least up until last week, when I submitted a pull request to minimize the amount of mountain climbing required.

The problem Terra runs into is that it tries to use a registry value to find the location of Visual Studio and then work out where link.exe is from there, then finds the include directories for the C runtime. This hasn't worked since Visual Studio 2017 and also requires custom handling for each version because compiling an iteration of Visual Studio apparently involves throwing the directory structure into the air, watching it land on the floor in a disorganized mess, and drawing lines between vaguely related concepts. Good for divining the true nature of the C library, bad for building directory structures. Unfortunately, should you somehow manage to compile Terra, it will abruptly stop working the moment you try to call printf, claiming that printf does not actually exist, even after importing stdio.h.

Many Terra tests assume that printf actually resolves to a concrete symbol. This is not true and hasn't been true since Visual Studio 2015, which turned several stdio.h functions into inline-only implementations. In general, the C standard library is under no obligation to produce an actual concrete symbol for any function - or to make sense to a mere mortal, for that matter. In fact, it might be more productive to assume that the C standard was wrought from the unholy, broiling chaos of the void by Cthulhu himself, who saw fit to punish any being foolish enough to make reasonable assumptions about how C works.

Unfortunately, importing stdio.h does not fix this problem, for two reasons. One, Terra did not understand inline functions on Windows. They were ephemeral wisps, vanishing like a mote of dust on the wind the moment a C module was optimized. A pull request fixed this, but it can't fix the fact that the Windows SDK was wrought from the innocent blood of a thousand vivisected COMDAT objects. Microsoft's version of stdio.h can only be described as an extra-dimensional object, a meta-stable fragment of a past universe that can only be seen in brief slivers, never all at once.

Luckily for the Terra project, I am the demonic presence they need, for I was once a Microsoftie. Long ago, I walked the halls of the Operating Systems Group and helped craft black magic to sate the monster's unending hunger. I saw True Evil blossom in those dark rooms, like having only three flavors of sparkling water and a pasta station only open on Tuesdays.

I know the words of Black Speech that must be spoken to reveal the true nature of Windows. I know how to bend the rules of our prison, to craft a mighty workspace from the bowels within. After fixing the cmake implementation to function correctly on Windows, I intend to perform the unholy incantations required to invoke the almighty powers of COM, so that it may find on which fifth-dimensional hyperplane Visual Studio exists. Only then can I disassociate myself from the mortal plane for long enough to tackle the stdio.h problem. You see, children, programming for Windows is easy! All you have to do is s͏̷E͏l͏̢҉l̷ ̸̕͡Y͏o҉u͝R̨͘ ̶͝sơ̷͟Ul̴

For those of you who actually wish to try Terra, but don't want to wait for me to fix everything in a new release, you can embed the following code at the top of your root Terra script:

if os.getenv("VCINSTALLDIR") ~= nil then
  -- We're inside a Visual Studio developer command prompt, so take the paths
  -- from its environment variables instead of Terra's registry lookup.
  terralib.vshome = os.getenv("VCToolsInstallDir")
  if not terralib.vshome then
    -- Older toolchains (pre-2017) only set VCINSTALLDIR and use the old layout.
    terralib.vshome = os.getenv("VCINSTALLDIR")
    terralib.vclinker = terralib.vshome..[[BIN\x86_amd64\link.exe]]
  else
    -- VS2017+ layout: bin\Host<host-arch>\<target-arch>\link.exe
    terralib.vclinker = ([[%sbin\Host%s\%s\link.exe]]):format(terralib.vshome, os.getenv("VSCMD_ARG_HOST_ARCH"), os.getenv("VSCMD_ARG_TGT_ARCH"))
  end
  -- Let the developer prompt's INCLUDE, LIB and Path variables drive the rest.
  terralib.includepath = os.getenv("INCLUDE")

  function terralib.getvclinker()
    local vclib = os.getenv("LIB")
    local vcpath = terralib.vcpath or os.getenv("Path")
    vclib,vcpath = "LIB="..vclib,"Path="..vcpath
    return terralib.vclinker,vclib,vcpath
  end
end

Yes, we are literally overwriting parts of the compiler itself, at runtime, from our script. Welcome to Lua! Enjoy your stay, and don't let the fact that any script you run could completely rewrite the compiler keep you up at night!

The Existential Horror of Terra Symbols

Symbols are one of the most slippery concepts introduced in Terra, despite their relative simplicity. When encountering a Terra Symbol, one usually finds it in a function that looks like this:

TkImpl.generate = function(skip, finish) return quote
    if [TkImpl.selfsym].count == 0 then goto [finish] end 
    [TkImpl.selfsym].count = [TkImpl.selfsym].count - 1
    [stype.generate(skip, finish)]
end end

Where selfsym is a symbol that was set elsewhere.

“Aha!” says our observant student, “a reference to a variable from an outside context!” This construct does let you access a variable from another area of the same function, and using it to accomplish that will generally work as you expect, but what it's actually doing is much worse (or, to put it charitably, more subtle). You see, grasshopper, a symbol is not a reference to a variable node in the AST; it is a reference to an identifier.

local sym = symbol(int)
local inc = quote [sym] = [sym] + 1 end

terra foo()
  var [sym] = 0
  inc
  inc
  return [sym]
end

terra bar()
  var [sym] = 0
  inc
  inc
  inc
  return [sym]
end

Yes, that is valid Terra (foo() returns 2 and bar() returns 3), and yes, the people who built this language did this on purpose. Why any human being still capable of love would ever design such a catastrophe is simply beyond me. Each symbol represents not a reference to a variable, but a unique variable name: whatever variable has been declared with that identifier in the current Terra scope is the one the spliced quote ends up touching, which is why the same inc increments a different variable in foo and in bar. You aren't passing around variable references, you're passing around variable names.

These aren't just symbols, they're typed preprocessor macros. They are literally C preprocessor macros, capable of causing just as much woe and suffering as one, except that they are typed and they can't redefine existing terms. This is, admittedly, slightly better than a normal C macro. However, seeing as there have been entire books written about humanity's collective hatred of C macros, this is equivalent to being a slightly more usable programming language than Brainfuck. This is such a low bar it's probably buried somewhere in the Mariana Trench.

Terra is C but the Preprocessor is Lua

You realize now, the monstrosity we have unleashed upon the world? The sin Terra has committed now lies naked before us.

Terra is C if you replaced the preprocessor with Lua.

Remember how Terra says you can implement Java-like and Go-like class systems? You can't. Or rather, you will end up with a pathetic imitation, a facsimile of a real class system, stripped down to the bone and bereft of any useful mechanisms. It is nothing more than an implementation of vtables, just like you would make in C. Because Terra is C. It's metaprogrammable C.

There can be no constructors, or destructors, or automatic initialization, or any sort of borrow checking analysis, because Terra has no scoping mechanisms. The only thing it provides is defer, which only operates inside Lua lexical blocks (do and end)… sometimes, if you get lucky. The exact behavior is a bit confusing, and of course can only be divined by random experimentation because it isn't documented anywhere! Terra's only saving grace, the singular keyword that allows you to attempt to build some sort of pretend object system, isn't actually mentioned anywhere.
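
For illustration, here is roughly what that single saving grace looks like in use. This is a sketch that assumes terralib.includec resolves fopen/fclose/fgets normally on your platform (which, per the Windows section above, is not a given), and since defer's exact semantics are undocumented, it sticks to the common case:

local C = terralib.includec("stdio.h")

terra firstline(path : rawstring)
  var f = C.fopen(path, "r")
  if f == nil then
    return -1
  end
  -- Runs when the surrounding scope exits, regardless of which return fires.
  defer C.fclose(f)
  var buf : int8[256]
  if C.fgets(&buf[0], 256, f) ~= nil then
    C.printf("%s", &buf[0])
  end
  return 0
end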

Of course, Terra's metaprogramming is Turing-complete, and it is technically possible to implement some of these mechanisms, but only if you either wrap absolutely every single variable declaration in a function, or you introspect the AST and annotate every single variable with initialization statuses and then run a metaprogram over it to figure out when constructors or destructors or assignment operators need to be called. Except, this might not work, because the (undocumented, of course) __update metamethod that is supposed to trigger when you assign something to a variable has a bug where it's not always called in all situations. This turns catching assignments and finding the l-value or r-value status from a mind-bogglingly difficult, Herculean task into a near-impossible trial of cosmic proportions that probably requires the help of at least two Avengers.

There Is No Type System

If Terra was actually trying to build a metaprogramming equivalent to templates, it would have an actual type system. These languages already exist - Idris, Omega, F*, Ada, Sage, etc. - but none of them are interested in using their dependent type systems to actually metaprogram low-level code (although F* can produce it). The problem is that building a recursively metaprogrammable type system requires building a proof assistant, and everyone is so proud of the fact they built a proof assistant they forget that dependent type systems can do other things too, like build really fast memcpy implementations.

Terra, on the other hand, provides only the briefest glimpse of a type system. Terra functions enjoy what is essentially a slightly more complex C type system. However, the higher-level Lua context is, well, Lua, which has five basic types: Tables, Functions, Strings, Booleans and Numbers (it also has Thread, Nil, Userdata and CData for certain edge cases). That's it. Also, it's dynamic, not static, so everything is a syntax or a runtime error, because it's a scripting language. This means all your metaprogramming is sprinkled with type-verification calls like :istype() or :isstruct(), except the top came off the shaker and now the entire program is just sprinkles, everywhere. This is fine for when your metaprograms are, themselves, relatively simple. It is not fine when you are returning meta-programs out of meta-meta-functions.
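
For instance, a typical metafunction ends up looking something like the following sketch, built around the istype/isstruct style of checks mentioned above. The function itself (passthrough) is made up purely for illustration:

-- Illustrative only: before generating any Terra code, the metafunction has
-- to verify by hand, at runtime, that it was actually handed a Terra struct
-- type, because Lua itself has no idea what a "Terra type" is.
local function passthrough(T)
  assert(terralib.types.istype(T), "passthrough expects a Terra type")
  assert(T:isstruct(), "passthrough expects a struct type")
  return terra(x : T)
    return x
  end
end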

This is the impasse I find myself at, and it is the answer to the question I know everyone wants to know the answer to. For the love of heaven and earth and all that lies between, why am I still using Terra?

The truth is that the project I'm working on requires highly complex metaprogramming techniques in order to properly generate type-safe mappings for arbitrary data structures. Explaining why would be an entire blog post on its own, but suffice to say, it's a complex user interface library that's intended to run on tiny embedded devices, which means I can't simply give up and use Idris, or indeed anything that involves garbage collection.

What I really want is a low-level, recursively metaprogrammable language that is also recursively type-safe, in that any type strata can safely manipulate the code of any layer beneath it, preferably via algebraic subtyping that ensures all types are recursively a subset of types that contain them, ad nauseam. This would then allow you to move from a “low-level” language to a “high-level” language by simply walking up the tower of abstraction, building meta-meta-programs that manipulate meta-programs that generate low-level programs.

Alas, such beauty can only exist in the minds of mathematicians and small kittens. While I may one day attempt to build such a language, it will be nothing more than a poor imitation, forever striving for an ideal it cannot reach, cursed with a vision from the gods of a pristine language no mortal can ever possess.

I wish to forge galaxies, to wield the power of computation and sail the cosmos upon an infinite wave of creativity. Instead, I spend untold hours toiling inside LLVM, wondering why it won't print “Hello World”.

In conclusion, everything is terrible and the universe is on fire.


RISC Is Fundamentally Unscalable


Today, there was an announcement about a new RISC-V chip, which has got a lot of people excited. I wish I could also be excited, but to me, this is just a reminder that RISC architectures are fundamentally unscalable, and inevitably stop being RISC as soon as they need to be fast. People still call ARM a “RISC” architecture despite ARMv8.3-A adding a FJCVTZS instruction, which is “Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero”. Reduced instruction set, my ass.

The reason this keeps happening is because the laws of physics ensure that no RISC architecture can scale under load. The problem is that a modern CPU is so fast that just accessing the L1 cache takes anywhere from 3-5 cycles. This is part of the reason modern CPUs rely so much on register renaming, allowing them to have hundreds of internal registers that are used to make things go fast, as opposed to the paltry 90 registers actually exposed, 40 of which are just floating point registers for vector operations. The fundamental issue that CPU architects run into is that the speed of light isn't getting any faster. Even getting an electrical signal from one end of a CPU to the other now takes more than one cycle, which means the physical layout of your CPU now has a significant impact on how long operations take. Worse, the faster the CPU gets, the more this lag becomes a problem, so unless you shrink the entire CPU or redesign it so your L1 and L2 caches are physically closer to the transistors that need them, the latency from accessing those caches can only go up, not down. The CPU might be getting faster, but the speed of light isn't.

Now, obviously RISC CPUs are very complicated architectures that do all sorts of insane pipelining to try and execute as many instructions at the same time as possible. This is necessary because, unless your data is already loaded into registers, you might spend more cycles loading data from the L1 cache than doing the actual operation! If you hit the L2 cache, that will cost you 13-20 cycles by itself, and L3 cache hits are 60-100 cycles. This is made worse by the fact that complex floating-point operations can almost always be performed faster by encoding the operation in hardware, often in just one or two cycles, when manually implementing the same operation would've taken 8 or more cycles. The FJCVTZS instruction mentioned above even sets a specific flag based on certain edge-cases to allow an immediate jump instruction to be done afterwards, again to minimize hitting the cache.

All of this leads us to single instruction multiple data (SIMD) vector instructions common to almost all modern CPUs. Instead of doing a complex operation on a single float, they do a simple operation to many floats at once. The CPU can perform operations on 4, 8, or even 16 floating point numbers at the same time, in just 3 or 4 cycles, even though doing this for an individual float would have cost 2 or 3 cycles each. Even loading an array of floats into a large register will be faster than loading each float individually. There is no escaping the fact that attempting to run instructions one by one, even with fancy pipelining, will usually result in a CPU that's simply not doing anything most of the time. In order to make things go fast, you have to do things in bulk. This means having instructions that do as many things as possible, which is the exact opposite of how RISC works.

Now, this does not mean CISC is the future. We already invented a solution to this problem, which is VLIW - Very Large Instruction Word. This is what Itanium was, because researchers at HP anticipated this problem 30 years ago and teamed up with Intel to create what eventually became Itanium. In Itanium, or any VLIW architecture, you can tell the CPU to do many things at once. This means that, instead of having to build massive vector processing instructions or other complex specialized instructions, you can build your own mega-instructions out of a much simpler instruction set. This is great, because it simplifies the CPU design enormously while sidestepping the pipelining issues of RISC. The problem is that this is really fucking hard to compile, and that's what Intel screwed up. Intel assumed that compilers in 2001 could extract the instruction-level parallelism necessary to make VLIW work, but in reality we've only very recently figured out how to reliably do that. 20 years ago, we weren't even close, so nobody could compile fast code for Itanium, and now Itanium is dead, even though it was specifically designed to solve our current predicament.

With that said, the MILL instruction set uses VLIW along with several other innovations designed to compensate for a lot of the problems discussed here, like having deferred load instructions to account for the lag time between requesting a piece of data and actually being able to use it (which, incidentally, also makes MILL immune to Spectre because it doesn't need to speculate). Sadly, MILL is currently still vaporware, having not materialized any actual hardware despite its promising performance gains. One reason for this might be that any VLIW architecture has a highly unique instruction set. We're used to x86, which is so high-level it has almost nothing to do with the underlying CPU implementation. This is nice, because everyone implements the same instruction set and your programs all work on it, but it means the way instructions interact is hard to predict, much to the frustration of compiler optimizers. With VLIW, you would very likely have to recompile your program for every single unique CPU, which is a problem MILL has spent quite a bit of time on.

MILL, and perhaps VLIW in general, may have a saving grace with WebAssembly, precisely because it is a low-level assembly language that can be efficiently compiled to any architecture. It wouldn't be a problem to have unique instruction sets for every single type of CPU, because if you ship WebAssembly, you can simply compile the program for whatever CPU it happens to be running on. A lot of people miss this benefit of WebAssembly, even though I think it will be critical in allowing VLIW instruction sets to eventually proliferate. Perhaps MILL will see the light of day after all, or maybe someone else can come up with a VLIW version of RISC-V that's open-source. Either way, we need to stop pretending that pipelining RISC is going to work. It hasn't ever worked and it's not going to work, it'll just turn into another CISC with a javascript floating point conversion instruction.

Every. Single. Time.


Software Engineering Is Bad, But That's Not Why


I've been writing code for over 12 years, and for a while I've been disgusted by the sorry state of programming. That's why I felt a deep kinship with Nikita Prokopov's article, Software Disenchantment. It captures the intense feeling of frustration I have with the software industry as a whole and the inability of modern programmers to write anything even remotely resembling efficient systems.

Unfortunately, much of it is wrong.

While I wholeheartedly agree with what Nikita is saying, I am afraid that he doesn't seem to understand the details behind modern computers. This is unfortunate, because a misinformed article like this weakens both our positions and makes it more difficult to convince people that software bloat is a problem. Most of the time, his frustrations are valid, but his reasoning is misdirected. So, in this article, I'm going to write counterpoints to some of the more problematic claims.

1. Smooth Scroll

One of my hobbies is game development, and I can assure you that doing anything at 4K resolution at 60 FPS on a laptop is insanely hard. Most games struggle to render at 4K 60 FPS with powerful GPUs, and 2D games are usually graphically simplistic, in that they can have lots of fancy drawings, but drawing 200 fancy images on the screen with hardly any blending is not very difficult. A video is just a single 4K image rendered 60 times a second (plus interpolation), which is trivial. Web renderers can't do that, because HTML has extremely specific composition rules that will break a naïve graphics pipeline. There is also another crucial difference between a 4K video, a video game, and a webpage: text.

High quality anti-aliased and sometimes sub-pixel hinted text at 10 different sizes on different colored backgrounds blended with different transparencies is just really goddamn hard to render. Games don't do any of that. They have simple UIs with one or two fonts that are often either pre-rendered or use signed distance fields to approximate them. A web browser is rendering arbitrary unicode text that could include emojis and god knows what else. Sometimes it's even doing transformation operations in realtime on an SVG vector image. This is hard, and getting it to run on a GPU is even harder. One of the most impressive pieces of modern web technology is Firefox's WebRender, which actually pushes the entire composition pipeline to the GPU, allowing it to serve most webpages at a smooth 60 FPS. This is basically as fast as you can possibly get, which is why this is a particularly strange complaint.

I think the real issue here, and perhaps what Nikita was getting at, is that the design of modern webpages is so bloated that the web browsers can't keep up. They're getting inundated with <div> trees the size of Mount Everest and 10 megs worth of useless javascript bootstrapping ad campaigns that load entire miniature videos. However, none of this has anything to do with the resolution or refresh rate of your screen. Useless crap is going to be slow no matter what your GPU is. Inbox taking 13 seconds to load anything is completely unacceptable, but animating anything other than a white box in HTML is far more expensive than you think, and totally unrelated.

2. Latency

Latency is one of the least understood values in computer science. It is true that many text editors have abysmal response times caused by terrible code, but it's a lot easier to screw this up than people realize. While CPUs have gotten faster, the latency between hardware components hasn't improved at all, and in many cases cannot possibly improve. This is because latency is dominated by physical separation and connective material. The speed of light hasn't changed in the past 48 years, so why would the minimum latency?

The problem is that you can't put anything on top of a system without increasing its latency, and you can't decrease the latency unless you bypass a system. That 42-year-old emacs system was basically operating at its theoretical maximum because there was barely anything between the terminal and the keyboard. It is simply physically impossible to make that system more responsive, no matter how fast the CPU gets. Saying it's surprising that a modern text editor is somehow slower than a system operating at the minimum possible latency makes absolutely no sense, because the more things you put between the keyboard and the screen, the higher your latency will be. This has literally nothing to do with how fast your GPU is. Nothing.

It's actually much worse, because old computers didn't have to worry about silly things like composition. They'd do v-sync themselves, manually, drawing the cursor or text in between vertical blanks of the monitor. Modern graphics draw to a separate buffer, which is then flipped to the screen on its next refresh. The consequence, however, is that drawing a new frame right after a vertical blank ignores all the input you got that frame! You can only start drawing after you've processed all the user input, so once you start, it's game over. This means that if a vertical blank happens every 16.6 ms, and you start drawing at the beginning of that frame, you have to wait 16.6 ms to process the user input, then draw the next frame and wait another 16.6 ms for the new buffer to get flipped to the screen!

That's 33ms of latency right there, and that's if you don't screw anything up. A single badly handled async call could easily introduce another frame of lag. As modern hardware connections get more complex, they introduce more latency. Wireless systems introduce even more latency. Hardware abstraction layers, badly written drivers, and even the motherboard BIOS can all negatively impact the latency and we haven't even gotten to the application yet. Again, the only way to lower latency is to bypass layers that add latency. At best, perfectly written software would add negligible latency and approach the latency of your emacs terminal, but could never surpass it (unless we start using graphene).

We should all be pushing for low-latency systems, but electron apps are simply the worst offender. This is something everyone, from the hardware to the OS to the libraries, has to cooperate on if we want responsive computers.

3. Features

It seems silly to argue that computers today have no new features. Of course they have new features. A lot of the features are ones I don't use, but they do get new features and occasionally they are actually nice. I think the real problem here is that each new feature requires exponentially more resources than the feature before it, often for no apparent reason. Other times, basic features that are trivial to implement are left out, also for no apparent reason.

For example, Discord still doesn't know how to de-duplicate resent client messages over a spotty connection despite this being a solved problem for decades, and if a message is deleted too quickly, the deletion packet is received before the message itself, and the client just… never deletes it. This could be trivially solved with a tombstone or even just a temporary queue of unmatched deletion messages, yet the client instead creates a ghost message that you can't get rid of until you restart the client. There is absolutely no reason for this feature to not exist. It's not even bloat, it's just ridiculous.
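
To show just how small the missing piece is, here is a hypothetical sketch of the tombstone approach in Lua. The names and structure are mine, not Discord's:

local messages = {}    -- id -> message text
local tombstones = {}  -- id -> true, for deletions that arrived early

local function on_delete(id)
  if messages[id] ~= nil then
    messages[id] = nil     -- normal case: the message already arrived
  else
    tombstones[id] = true  -- deletion beat the message; remember it
  end
end

local function on_message(id, text)
  if tombstones[id] then
    tombstones[id] = nil   -- already deleted; never display the ghost
  else
    messages[id] = text
  end
end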

However, a few other comparisons in here really don't make any sense. For example, an installation of Windows 10 is only 4 GB because of extreme amounts of compression, yet the article compares this with a 6 GB uncompressed basic install of android. Windows 10 is actually 16 GB once it's actually installed (20 GB for 64-bit). While software bloat is a very real problem, these kinds of comparisons are just nonsense.

4. Compilers

Now, this one I really don't understand. Any language other than C++ or Rust basically compiles instantly until you hit 100k lines of code. At work, we have a C# monstrosity that's half a million lines of code and compiles in 20 seconds. That's pretty fast. Most other languages are JIT-compiled, so you can just run them instantly. Even then, you don't really want to optimize for compile time on Release mode unless you're just removing unnecessary bloat, and many modern compilers take a long time to compile things because they're doing ridiculously complex optimizations that may require solving NP-hard optimization problems, which some actually do.

The original C language and Jonathan Blow's language compile really fast because they don't do anything. They don't help you, they have an incredibly basic type system, they don't do advanced memory analysis or a bunch of absurd optimizations to take advantage of the labyrinthine x86-64 instruction set. Languages in the 1990s compiled instantly because they had no features. Heck, sometimes compilation is actually disk bound, which is why getting an SSD can dramatically improve compile times for large projects. This has nothing to do with the compiler!

My question here is what on earth are you compiling that takes hours to compile?! The only projects I'm aware of that take this long are all C++ projects, and it's always because of header files, because header files are terrible and thankfully no other language ever made that mistake ever again. I am admittedly disappointed in Rust's compilation times, but most other languages try to ensure that at least debug compilation is ridiculously fast.

I think the complaints here are mostly due to bloated javascript ecosystems that pile NPM modules on top of each other until even a simple linter takes forever to run, or when coders write their entire program in a completely different language that transpiles to javascript and then minify the javascript and that's if you don't put it through Babel to polyfill back to earlier versions of javascript and… this sure seems like a javascript problem to me, not a general issue with modern compilers.

Hopefully, webassembly will eliminate this problem, at least on the web. As for everywhere else, unless you're using a complex systems programming language, compilation times usually aren't that bad, and even when they are, they exist for a reason.

Why would you complain about memory usage and then in the next breath complain about compilation times? Language features are not free. Rust's lifetime analysis is incredibly difficult to do, but frees the programmer from worrying about memory errors without falling back to a stop-the-world garbage collector, which is important when garbage collection uses 2-3 times more memory than it actually needs (depending on how much performance you want).

Efficient code, fast compilation and memory safety. Pick two. If you feel that compilers can simply magically improve everything, you've probably been living in javascript land for too long, where everything is horrible all the time for no reason. The rest of computing is not actually that horrible. The real problem is that most things are now being written in javascript, so everyone inherits all of javascript's problems. If you don't want javascript's problems, stop using it.

Conclusion

I care very deeply about the quality of code that our industry is putting out, and I love the Better World Manifesto that Nikita has proposed. However, it is painful for me to read an article criticizing bad engineering that gets the technical details wrong. A good engineer makes sure he understands what he's doing, that's the whole point of the article! If we truly want to be better engineers, we need to understand the problems we face and what we can do to fix them. We need to understand how our computers work and the fundamental algorithmic trade-offs we make when we compare problem-solving approaches.

Years ago, I wrote my own article on this, and I feel it is still relevant today. I asked if anyone actually wants good software. At least now, I know some people do. Maybe together, we can do something about it.


Why Do People Use The Wrong Email?


Ever since 2013, I've consistently started getting registration e-mails in foreign languages from sites I definitely did not sign up for.

It started with Instagram, on which a bizarrely determined young boy from somewhere around Denmark was trying to register using my e-mail address. Instagram lets you remove an e-mail from an account, which is what I did, repeatedly, but the kid kept adding the non-functional e-mail back on to the account. Eventually I forced a password reset and forcibly deleted his account, in an attempt to dissuade him from using someone else's e-mail in the future. Astonishingly, this did not work, and I was forced to register on Instagram just to prevent my e-mail from being used.

He shared a first name with me, and I noticed his name on a few of the other e-mails I had gotten. At first, I thought it was just this one kid, possibly related to the infamous gmail dot issue, but astoundingly, most of the time the e-mail had no dots and no apparent typos, it was just… my e-mail. Then I started getting even weirder e-mails.

  • Someone else near Denmark used my e-mail to open an Apple ID. I went in to disable the account and it included payment information and their home address, along with the ability to remotely disable their Apple device.
  • I once got a Domino's order receipt from someone on Rhode Island, which included their full name, home address, and phone number.
  • Just recently, someone signed up for Netflix, got the account temporarily suspended for lack of payment, and then added a payment option before I decided to go in and change the e-mail while also signing up for Netflix so I wouldn't have to deal with that anymore. I could see part of the credit card payment option they had used.
  • Another time, I woke up to someone in a European timezone creating an account on Animoto and then uploading 3 videos to it before I could reset the password and lock out the account.
  • At least two sites included a plaintext password in the e-mail, although they didn't seem very legitimate in the first place.

What's really frightening is discovering just how fragile many of these websites are. Most of them that allow you to change your e-mail address don't require the new e-mail to be verified, allowing me to simply change it to random nonsense and render the account permanently inaccessible. Others allow your account to function without any sort of e-mail verification whatsoever.

One of my theories was that people just assumed they were picking a username that happened to have @gmail.com on the end of it. My e-mail is my first name and a number, which probably isn't hard for someone also named Erik to accidentally choose. However, some of these e-mails are for people clearly not named Erik, so where is the e-mail coming from? Why use it?

So far, I've had my e-mail used incorrectly to sign up for these services:

  • Netflix (Spanish) - Cely A.
  • PlayView (Spanish)
  • Mojang (English)
  • Apple ID (Danish) - Seier Madsen
  • Telekom Fon (Hungarian)
  • Nutaku (English) - Wyled1
  • Samsung (Spanish)
  • Forex Club (Russian) - Eric
  • Marvel Contest of Champions (Portuguese)
  • Jófogás (Hungarian)
  • Wargaming.net (Russian)
  • Deezer (English) - Erik Morales
  • Crossfire (Portuguese)
  • Instagram (Danish) - Erikhartsfield
  • List.am (Armenian)
  • ROBLOX (English) - PurpleErik18
  • cccraft.net (Hungarian)
  • ThesimpleClub (German)
  • Cadastro Dabam (Portuguese)
  • Első Találkozás (Hungarian) - Rosinec
  • Pinterest (Portuguese) - Erik
  • MEGA (Spanish)
  • mestermc.hu (Hungarian) - Rosivagyok
  • Snapchat (English)
  • Skype (Swedish)
  • PlayIT (Hungarian) - hírlevél
  • Animoto (English) - Erik
  • Geometry Dash (English) - erikivan1235
  • Club Penguin (Spanish)
  • LEGO ID (English) - szar3000
  • Seejaykay.com (English)
  • Dragon's Prophet (English)
  • Sweepstakes (English) - ErikHartsfield
  • School.of.Nursing (English) - ErikHartsfield
  • SendEarnings (English) - ErikHartsfield
  • Talkatone (English) - Cortez
  • Anonymous VPN (English)
  • Penge (Hungarian)
  • Apple ID (Swedish) - Erik
  • Snapchat (Swedish) - kirenzo
  • Snapchat (Swedish) - erik20039
  • ROBLOX (English) - Mattias10036
  • Riot Games (English) - epik991122
  • Instagram (English) - opgerikdontcare
  • Goodgame Empire (English) - rulererikman

Given how fundamental e-mail is to our modern society, it's disconcerting that some people, especially young kids, have no idea how powerful an e-mail is. When they provide the wrong e-mail for a service, they are handing over the master keys to their account. These services use e-mail as a primary source of identification, and some of them don't even seem to realize they're using the wrong e-mail.

Perhaps this speaks to the fact that, despite all the work large software corporations claim they put into making intuitive user interfaces, basic aspects of our digital world are still arcane and confusing to some people. Forget trying to replace passwords with biometrics, some people don't even understand how e-mail works. Maybe the software industry needs to find a more intuitive way to assert someone's identity.

Or maybe people are just dumb.


Software Optimizes to Single Points of Failure


Whenever people talk about removing single points of failure, most of the suggestions involve “distributed systems” that are resilient to hardware failures. For software, we've invented code signing and smart contracts via blockchain to ensure the code we're running is what we expected to run.

But none of these technologies can prevent a bug from taking down the entire system.

A lot of people point to Google being a single point of failure. They are only partially correct, because Google's hardware is distributed and extremely redundant. No single piece of hardware in a Google Data center failing can take down the entire data center. You could probably nuke the entire data center and most Google services could fall back to another data center. In fact, Google has developed software canaries to catch bugs from propagating too far into production in an attempt to address the problem of their software being a single point of failure.

But something did take down the entirety of Google Compute once. It was a software bug in the canary itself. Of course, all the canaries were running the same software, so all of them had the same bug, and none of them could catch the configuration bug that was being propagated to all of their routers.

By creating a software canary, Google had simply shifted the single point of failure to its canary software. It was much harder for it to fail, but it was still a single point of failure, so when it did fail, it took down the entire system.

We've put a lot of work into trying to reduce the amount of bugs in mission critical systems, going so far as to try to create provably correct software. The problem is that no system can prove that it is free of design-flaws, which occur when the software operates correctly, but does something nobody actually wanted it to do. All of our code-signing and trusted computing initiatives do is make it very difficult for someone to sneak bad code into a widely used library. None of them, however, remove the single point of failure. Should the NSA ever succeed in sneaking in a backdoor to a widely used open source library, it will propagate to everything.

A very well guarded single point of failure is still a single point of failure, no matter how remote the chances of it actually failing. Tom Scott has an excellent video about how a trusted engineer at Google who is allowed to bypass all their security checks could go rogue and remove all the password checks on everything, and it would be incredibly hard to stop them.

Physical infrastructure is much more resilient to these kinds of problems, because even if every piece of infrastructure has the same problem, you still have to physically get to it in order to exploit the problem. This makes it very hard for anyone to simultaneously sabotage any country's offline infrastructure without an incredible amount of work. Software, however, lets us access everything from everywhere. The internet removes physical access as a last resort.

Of course, this is not an insurmountable problem, but it is deceptively difficult to overcome. For example, let's say we have a bunch of drones we're controlling. To avoid one bug from taking all of them out at once, half of them run one flying program and the other run a completely different flying program, developed independently. Unfortunately, both of these programs rely on the same library that reads the gyroscope data. If that library has a bug, the entire swarm will crash into a mountain. Having the swarm calculate the data for each other and compare results doesn't help, because everyone gets the wrong result. The software logic itself is wrong.

The reason this is so insidious is that it runs counter to sane software development practices. To minimize bugs, we minimize complexity, which means writing the least amount of code possible, which inadvertently optimizes to a single point of failure. We re-use libraries and share code. We deliberately try to solve problems exactly once and re-use this code everywhere in our program. This is a good thing because it means any bugs we fix propagate everywhere else, but it comes at the cost of propagating any bugs we introduce.

Soon, our world will be consumed by automation, one way or another. Cory Doctorow suggests that hardware should only run software the user trusts, but what if I end up trusting buggy software? If all our self-driving cars run the same software, what happens when it has a bug? Even worse, what if all the different self-driving car companies have their own software, custom built by highly paid engineers… that all use OpenSSL to securely download updates?

What if OpenSSL has a bug?

It's not clear what can be done about this. Obviously we shouldn't go around introducing unnecessary complexity that creates even more bugs, but at the same time we shouldn't delude ourselves into thinking our distributed systems have no single point of failure. They may be robust to hardware failures, but the software they run on will continue to be a single point of failure for the foreseeable future.

