Erik McClure

Rust Async Makes Me Want To Gouge My Eyes Out


One of the most fundamental problems with Rust is the design of Result. It is a lightweight, standardized error return value, similar to a C-style error code but implemented at the type system level, and able to carry arbitrary information. Results are easy to use and very useful, and the ecosystem encourages you to use them over panic! whenever possible. Unfortunately, this ends up creating a problem. A Result is not like a C++ exception, because it doesn’t contain a stacktrace by default, nor does the compiler have any idea where it was first constructed, unless the error type it contains decides to capture that information upon construction using std::backtrace.
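
To see what that workaround looks like, here is a minimal sketch (the type and its names are hypothetical, not any particular crate’s API) of an error type that captures a std::backtrace::Backtrace by hand at its construction site, because nothing else in the language will do it for you:

```rust
use std::backtrace::Backtrace;
use std::fmt;

// Hypothetical error type: nothing records where a Result came from,
// so the error type itself has to capture a backtrace the moment it
// is constructed.
#[derive(Debug)]
pub struct TracedError {
    msg: String,
    trace: Backtrace,
}

impl TracedError {
    pub fn new(msg: impl Into<String>) -> Self {
        TracedError {
            msg: msg.into(),
            // Only actually captured when RUST_BACKTRACE=1 (or
            // RUST_LIB_BACKTRACE=1) is set in the environment.
            trace: Backtrace::capture(),
        }
    }
}

impl fmt::Display for TracedError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}\n{}", self.msg, self.trace)
    }
}

fn parse_thing(input: &str) -> Result<u32, TracedError> {
    input
        .trim()
        .parse()
        .map_err(|e| TracedError::new(format!("bad input: {e}")))
}
```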

You can catch an unwinding panic!, which is implemented much more like a C++ exception, but a panic! that aborts cannot be caught, and it’s impossible to tell at compile time whether a panic will unwind, because the behavior can be changed at runtime. Another interesting difference is that the panic handler is invoked before the panic starts unwinding the stack. These implementation details push you towards using panic handlers purely as uncaught exception handlers, where you can do nothing but perhaps save critical information and dump a stacktrace before exiting. As a result, panic! is designed for, and used almost exclusively for, unrecoverable errors.
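
Here is a minimal sketch of both mechanisms side by side, assuming the default panic = "unwind" profile setting:

```rust
use std::panic;

fn main() {
    // The panic hook runs *before* unwinding begins, so at this point
    // you cannot know whether anyone downstream will catch the panic;
    // all you can safely do is log and dump a stacktrace.
    panic::set_hook(Box::new(|info| {
        eprintln!("panic hook fired: {info}");
    }));

    // This only catches anything when the binary was built with
    // panic = "unwind" (the default); with panic = "abort" the
    // process is already dead before you get here.
    let caught = panic::catch_unwind(|| {
        panic!("recoverable only if unwinding is enabled");
    });
    assert!(caught.is_err());
}
```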

If you want an error that is recoverable, you use Result… which doesn’t have a backtrace, unless you and all your dependencies are using some variant of anyhow (or its various forks), which allows you to add backtrace information. If the error is coming from a dependency that doesn’t use anyhow, you’re screwed. There actually is an RFC to fix this, but it’s been open for six years and shows no signs of being merged anytime soon.
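
For comparison, here is roughly what the happy path looks like when you and your dependencies do cooperate via anyhow (a sketch; the config.toml file is a stand-in):

```rust
use anyhow::{Context, Result};

fn read_config() -> Result<String> {
    // anyhow::Error captures a backtrace when the wrapped error is first
    // converted (if RUST_BACKTRACE=1), and .context() lets you stack
    // human-readable breadcrumbs on top of it.
    std::fs::read_to_string("config.toml").context("failed to read config.toml")
}

fn main() {
    if let Err(e) = read_config() {
        // The {:?} formatting prints the whole context chain, plus the
        // backtrace if one was captured.
        eprintln!("{e:?}");
    }
}
```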

“But”, I hear you ask, “what does this have to do with Rust Async?” Well, like most things, Rust Async makes this annoying part of Rust twice as annoying, because the default behavior is to silently eat the error and drop the future, unless you have stored the join handle somewhere and are in a position to access that join handle to find out what the actual error was. The API for making tokio panic when an unhandled panic happens is still unstable, with the interesting comment of “dropping errors silently is definitely the correct default behavior”. Really? In debug mode? In release mode, fine, that’s reasonable, but if I’ve compiled my program in debug mode I’m pretty sure I want to know if random errors are being thrown. Even with this API change, you’ll have to manually opt in to it; they won’t helpfully default to this behavior when you compile in debug mode.
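
A minimal sketch of the swallowing behavior, assuming a stock tokio runtime: drop the handle and the task’s failure never surfaces to your code; only awaiting it hands you the JoinError.

```rust
#[tokio::main]
async fn main() {
    // If this handle is dropped instead of stored, the task's failure
    // never surfaces to your code; the runtime keeps going as if
    // nothing happened.
    let handle = tokio::spawn(async {
        panic!("this error disappears unless someone awaits the handle");
    });

    // Only by awaiting the JoinHandle do you get a JoinError back that
    // tells you the task actually panicked.
    match handle.await {
        Ok(()) => println!("task completed"),
        Err(e) if e.is_panic() => eprintln!("task panicked: {e}"),
        Err(e) => eprintln!("task was cancelled: {e}"),
    }
}
```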

Until that feature gets stabilized, you basically have to throw all your JoinHandles into a JoinSet blender so you can tell when something errored out, and unless you are extremely sure you didn’t accidentally drop any JoinHandles on the floor (because Rust does not warn you if you do this), you probably need a timeout even after your main future has returned, in case there are zombie tasks that are still deadlocked.
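
A sketch of the blender in question, with the drain wrapped in a timeout so a deadlocked zombie task can’t hang the whole shutdown (task bodies and the 60-second budget are made up for illustration):

```rust
use std::time::Duration;
use tokio::task::JoinSet;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    let mut set = JoinSet::new();
    for i in 0..4 {
        set.spawn(async move {
            if i == 2 {
                panic!("task {i} exploded");
            }
        });
    }

    // Drain every task so no error gets dropped on the floor, and wrap
    // the drain in a timeout in case a zombie task is deadlocked and
    // join_next() would otherwise never return.
    let drained = timeout(Duration::from_secs(60), async {
        while let Some(result) = set.join_next().await {
            if let Err(e) = result {
                eprintln!("task failed: {e}");
            }
        }
    })
    .await;

    if drained.is_err() {
        eprintln!("shutdown timed out: something is probably deadlocked");
    }
}
```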

Oh, have I mentioned deadlocks? Because that’s what Rust async gives you instead of errors. Did you forget to await something? Deadlock. Did you await things in the wrong order? Deadlock. Did you forget to store the join handle, and then an error happened? Deadlock. Did you call a synchronous function that, 5 layers deep in the callstack, invokes the async runtime, because it doesn’t know it’s already inside an async call and you forgot that it tried to do that? Deadlock. Did you implement a poll() function incorrectly? Deadlock.
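
For example, here is the “forgot to await something” deadlock in about ten lines, as a sketch using tokio’s oneshot channel; futures in Rust are inert values, so this compiles without complaint and then hangs forever:

```rust
use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (tx, rx) = oneshot::channel::<u32>();

    // Futures are lazy: this block does absolutely nothing until polled.
    let send_it = async move {
        let _ = tx.send(42);
    };

    // `tx` is still alive inside the never-polled future, so the channel
    // never closes and never delivers: rx.await hangs forever. Deadlock.
    // One missing `send_it.await;` before this line is the entire bug.
    let answer = rx.await;
    println!("{answer:?}");
    drop(send_it);
}
```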

For simple deadlocks, something like tokio-console might be able to tell you something useful (“might” is doing a lot of work here). However, any time you forget to await something, or don’t call join on the LocalSet, or add things to the wrong LocalSet, or your poll function isn’t returning the right value, or the async waker was called incorrectly, you’re just screwed, because there is no list of “pending futures that have not been awaited yet” that you can look through, unless you get saved by your IDE noticing you didn’t await a future, which it often doesn’t. It definitely doesn’t tell you about accidentally dropping a JoinHandle, which is one of the most common issues.

But why would you have to implement a poll function? That’s reserved for advanced users– Nope, nope, actually you have to do that when implementing literally any AsyncRead/AsyncWrite trait. Oh, sorry, there are actually 4 different possible AsyncRead/AsyncWrite implementations, and they’re all slightly different and completely incompatible with each other, but they’re all equally easy to fuck up. Everything in Rust Async is absurdly easy to fuck up, and your reward is always the same: [your-program].exe has been running for over 60 seconds.
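
To make the point concrete, here is a minimal tokio-flavored AsyncRead (one of the four flavors), a toy sketch rather than production code; even a reader this trivial drags in the whole Pin/Poll/ReadBuf dance:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};

// A toy reader that yields one fixed byte slice and then EOF. Even this
// trivial case requires Pin, Poll, ReadBuf, and the implicit convention
// that "Ready with nothing written" means end-of-file.
struct OnceReader {
    data: Vec<u8>,
    done: bool,
}

impl AsyncRead for OnceReader {
    fn poll_read(
        mut self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        if !self.done {
            let n = self.data.len().min(buf.remaining());
            buf.put_slice(&self.data[..n]);
            self.done = true;
        }
        // Classic footgun: returning Poll::Pending here without arranging
        // for the waker to be called later is an instant silent deadlock.
        Poll::Ready(Ok(()))
    }
}
```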

I haven’t even mentioned how the tokio and futures runtimes are almost the same but have subtle differences between them, or how tokio relies on aspects of futures that have been factored out into futures-util, which should be in the standard library but isn’t, because the only thing they actually standardized was std::future itself. All this is ignoring the usual complaints about async function color-coding - I’m complaining about obnoxious implementation footguns on top of all the usual annoyances involved with poll-based async. Trying to use async is like trying to use a hammer made out of hundreds of tiny footguns hot-glued together.

I wish async was just one cursed corner of Rust that had its warts relatively self-contained, but that isn’t the case. Rust async is a microcosm of an endless stream of basic usability problems that the language simply doesn’t fix, and might not ever fix. I’m honestly not sure how they’re going to fix the split-borrow problem because the type system isn’t powerful enough to encode where a particular borrow came from, which is required to implement spatially disjoint borrows, which ends up creating an endless cascade of subtle complications.

For example, there are quite a few cases where serde_json errors are not very helpful. None of these situations would matter if you could open a debugger and go straight to whatever was throwing the error, but you can’t, because this is Rust, and serde_json doesn’t use anyhow, so you can’t inject any error context. format_serde_error was created to solve this exact problem, but it is no longer maintained and is buggy. Also, artifact dependencies still aren’t stabilized, despite the very obvious use-case of needing to test inter-process communication, which comes up in basically any process management framework. So this crazy hack exists instead.
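
A hypothetical minimal case of what you actually get (assuming serde with the derive feature): a line/column pair in the Display output, and nothing you can set a breakpoint on.

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct Config {
    name: String,
    port: u16,
}

fn main() {
    // One wrong type buried in a big document produces an error message
    // with a line/column pair (something like: invalid type: string
    // "eighty", expected u16 at line 1 column ...) but no stacktrace,
    // and no hook to find out where in your own code it surfaced.
    let err = serde_json::from_str::<Config>(r#"{ "name": "server", "port": "eighty" }"#)
        .unwrap_err();
    eprintln!("{err}");
}
```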

Rust’s ecosystem heavily relies on two undebuggable-by-default constructions: macros and async, which makes actually learning how to debug production Rust code about as fun as pulling your own teeth out. I have legitimately had an easier time hunting down memory corruption errors in C++ than trying to figure out where a particular error is being thrown when it is hidden inside a macro inside an error with no stacktrace information, because C++ has mature tooling for hunting down various kinds of memory errors.

Because of last year’s shenanigans, I am no longer confident that any of these problems will ever be fixed. Rust’s development has slowed to a crawl, and it seems like it’ll take years to stabilize features like variadic generics, which are still in the design phase despite all the problems the ecosystem runs into without them. It is extremely frustrating to see comments saying “oh, the ecosystem is just immature” when those comments are 5 years old. On the other hand, I am tired of clueless C or C++ fans trying to throw bricks at Rust over these kinds of problems when C++ has far worse sins. Because of this, I will continue building all future projects in Rust, at least until the dependently typed language I’m working on has a real compiler instead of a half-broken interpreter.

Because hey, at least it isn’t C.


Engineers Only Get Paid If Something Is Broken


Recently, Rust people have been getting frustrated with C developers who seem to base their entire personal identity on being able to do unsafe memory things. The thing is, this is not a phenomenon unique to C developers, or even to software developers in general, although the visibility of kernel C developers is higher than that of the average programmer. Most people will be upset about technology that makes their particular skillset irrelevant. This is best summarized by a famous quote:

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it" — Upton Sinclair

Professional C developers who dislike Rust’s borrow checker are almost always extremely good at doing manual memory management while making very few detectable errors (I say detectable because they usually miss some edge case that turns into a security nightmare 10 years later). They are either paid lots of money to do this specific task, or they have almost no experience doing anything else. They are one-trick ponies, and they are terrified of their one skill being made irrelevant, at which point they will either no longer make lots of money, or no longer have a job at all.

You could argue that they could simply learn Rust, but you must understand that they believe they are singularly talented at C, which in some cases may actually be true. If they start learning Rust now, they might end up being just an average Rust developer, and the thought of being average is absolutely terrifying to them. This is because it is perceived as a loss of social status, which human brains are hardwired to avoid at all costs. It sounds like they’re about to be “deported” because that is the exact psychological response that potentially losing social status provokes.

It’s not just languages, either. When programmers ask “why is this ecosystem such a disaster”, half the time it’s because somebody is getting paid to deal with it. Our industry is trapped in an endless loop: a startup builds a new technology on top of some half-broken ecosystem and explodes in popularity, then everyone using the startup’s technology hires people to deal with the ecosystem it’s built on top of, and those people don’t actually want anyone to fix it, or they’ll be out of a job. There is no escaping the fact that if someone was getting paid to deal with something that was broken, and you fix it, you just made them irrelevant.

In 30 years, when Rust is slowly being replaced by something better, Rust developers will behave the exact same way. Someone will invent a borrow checker that is much more powerful and capable of solving most of the annoying borrow situations that baffle the current Rust borrow checker. Their response will be that this language is for “lazy” programmers, who don’t want to be as precise as a Real Rust Programmer. They’ll complain about things that don’t make any sense because they’ve never actually used the language they’re complaining about. The Rust programmers will sound just as dumb as C programmers do today.

I know this will happen because this already happens in literally every other field in existence. Musicians still sometimes claim that if you can’t play an actual instrument you aren’t a “real” musician, whatever that is. There was a big fight when Photoshop came out because artists complained that “ctrl-Z” was cheating and if you can’t paint on a real canvas you aren’t a Real Artist. It’s everywhere, and it’s universal.

This is not a programmer problem, it’s a people problem. When you look at this through the lens of livelihoods being threatened, you can instantly see that this is all the exact same instinctual human reaction: they have high status because they are incredibly skilled at a particular thing, and New Thing is threatening to make that skill either irrelevant or less important, and they don’t want to lose status.

The best defense against this behavior seems to be skill generalization and good self-esteem. If you are confident in your abilities as a musician, you don’t need to worry about people who are good at using a sequencer; instead, you might try to team up with them. If you are confident in your general problem-solving abilities as a programmer, then the language barely matters; what matters is which language is best suited for the problem at hand.

Software engineering in particular seems to suffer from hyper-specialization, with people having jobs working with extremely specific frameworks, like React, or Kubernetes, or whatever the newest Javascript framework is. It might be that the complexity of our problems is outstripping our tools’ abstractions, but regardless of the cause, if we don’t get things under control soon, this will just keep getting worse.


Measuring Competence Is Epistemic Hell


Sturgeon’s law states that 90% of everything is crap. Combined with Hanlon’s Razor, we arrive at the inescapable conclusion that most problems are caused by incompetence. What’s particularly interesting is that the number of incompetent people in a system tends to increase the higher up you go. Part of this is due to the Peter Principle, where organizations promote employees until they become incompetent, but this happens in the first place because competence gets harder to measure the longer it takes for the effects of actions to be felt, and as a species we have no way of measuring long-term incompetence. Instead, we rely on social cues, and tend to use whatever our local culture determines is “competent”.

One way to try to address this is to teach better critical thinking, but this almost always runs into fierce objections from parents who don’t want schools to “undermine parental authority”, which is what happened with the 2012 Republican Party of Texas platform (original). This kind of thinking is actually fairly common, and it is not a fluke of human nature - it is a feature.

To understand why humans can be inquisitive and intelligent on an individual level, but follow arbitrary and sometimes counterproductive rituals on a cultural level, you must understand that our ancestors lived in epistemic hell. My favorite example is the tribe that had a very long and complicated ritual for preparing manioc, which contained dangerous amounts of cyanide:

In the Americas, where manioc was first domesticated, societies who have relied on bitter varieties for thousands of years show no evidence of chronic cyanide poisoning. In the Colombian Amazon, for example, indigenous Tukanoans use a multistep, multiday processing technique that involves scraping, grating, and finally washing the roots in order to separate the fiber, starch, and liquid. Once separated, the liquid is boiled into a beverage, but the fiber and starch must then sit for two more days, when they can then be baked and eaten.

[..] even if the processing was ineffective, such that cases of goiter (swollen necks) or neurological problems were common, it would still be hard to recognize the link between these chronic health issues and eating manioc. Low cyanogenic varieties are typically boiled, but boiling alone is insufficient to prevent the chronic conditions for bitter varieties. Boiling does, however, remove or reduce the bitter taste and prevent the acute symptoms.

So, if one did the common-sense thing and just boiled the high-cyanogenic manioc, everything would seem fine. [..] Consider what might result if a self-reliant Tukanoan mother decided to drop any seemingly unnecessary steps from the processing of her bitter manioc. She might critically examine the procedure handed down to her from earlier generations and conclude that the goal of the procedure is to remove the bitter taste. She might then experiment with alternative procedures by dropping some of the more labor-intensive or time-consuming steps. She’d find that with a shorter and much less labor-intensive process, she could remove the bitter taste. Adopting this easier protocol, she would have more time for other activities, like caring for her children. Of course, years or decades later her family would begin to develop the symptoms of chronic cyanide poisoning.

Thus, the unwillingness of this mother to take on faith the practices handed down to her from earlier generations would result in sickness and early death for members of her family. Individual learning does not pay here, and intuitions are misleading. — "The Secret Of Our Success" by Joseph Henrich

Without modern tools, there is no possible way (other than acquiring brain damage from chronic cyanide poisoning) for an ancient human to realize that every step of the ritual is actually necessary, because without extensive experimentation over many human lifetimes, it isn’t obvious what danger the ritual is guarding against, and if it’s working as intended, no one will have seen the danger or be able to know about it in the first place! It seems that evolution always kept around enough sacrificial intelligent humans to tinker with new possible rituals, but always ensured that the majority of the population would obey the safe, known ways of doing things without questioning them, because trying to rationally evaluate an opaque ritual meant death. Not even the culture itself knew what disaster or point of failure the ritual was actually preventing, only that it kept them alive. Religion is simply a convenient way of packaging rituals; if you look at the rules set out by many ancient religions, a lot of them start looking like “how to run a functioning society” and include things like “keep your toilet clean”. They got popular because they worked; we just had no idea why, and in many cases couldn’t possibly have figured out why with the technology of the time. Even worse, if you got it wrong, it could take decades until you finally manifested an affliction that actually started causing problems.

This is the core evolutionary drive behind religion and conservative mindsets, where obeying authority is paramount to survival. In modern times, we can explain to our children why doing a particular thing is bad, because we know the entire chain of cause and effect. Just a few hundred years ago, we couldn’t even do that! A famous example is the effort to get iodine added to salt. Doctors didn’t resist the idea of adding iodine to salt for no reason; they resisted it because at every dosage that seemed like it could have an effect, it made people sick! They had experiments on fish showing that iodine seemed to make goiters go away, but giving people iodine supplements would always make them sick. At this point in time, nobody had any evidence whatsoever that micronutrients existed. Giving people just 150 micrograms of iodine a day, accomplished by evenly mixing tiny grains of potassium iodide into a kilogram of salt, seemed like homeopathic medicine; there was no known substance that had any effect at such a low concentration. Only by taking a leap of faith could Otto Bayard theorize that perhaps we needed just a tiny amount of iodine, going against all known nutritional science at the time.

Humans likely evolved culture as an alternative to animals’ reliance on old pack members to know what to do when an extremely rare but devastating event happened every hundred-ish years. Rituals could seem completely nonsensical within a single human lifespan, because they addressed problems at a societal level that only happened every 200 years, or slow-acting chronic issues. In one case, elderly elephant matriarchs were the only ones capable of remembering waterholes far enough away to survive a drought that only happened once every 35 years. The herds that lost their matriarchs all died, because they had lost this knowledge.

We evolved logic to solve problems that had clear first-order effects, but we aren’t very good at evaluating second-order effects. Long-lived humans were capable of finding cause-and-effect links that played out over a human lifespan, but only human culture, perpetuating strange and bizarre rituals created out of random experimentation, could deal with problems that had very long, unknowable cause-and-effect chains. It is very hard to tell if the person building your house is competent if the house only collapses every 150 years when a massive earthquake hits. Various cultures have developed all sorts of indirect methods of measuring competence, but many of them emphasize students obeying their teachers, because the teachers are often perpetuating rituals that are critically important without actually understanding why the rituals are important or what they guard against. It is culture guarding Chesterton’s fence over enormous timespans. Another good example of epistemic hell is how we cured scurvy by accident and then ruined the cure:

Originally, the Royal Navy was given lemon juice, which works well because it contains a lot of vitamin C. But at some point between 1799 and 1870, someone switched out lemons for limes, which contain a lot less vitamin C. Worse, the lime juice was pumped through copper tubing as part of its processing, which destroyed the little vitamin C that it had to begin with.

This ended up being fine, because ships were so much faster at this point that no one had time to develop scurvy. So everything was all right until 1875, when a British arctic expedition set out on an attempt to reach the North Pole. They had plenty of lime juice and thought they were prepared — but they all got scurvy. The same thing happened a few more times on other polar voyages, and this was enough to convince everyone that citrus juice doesn’t cure scurvy.

Our ancestors weren’t stupid. They were trying to find some kind of logical progression of cause and effect, but they lived in epistemic hell. This is why cargo-cult programming exists. This is why urban legends persist. This is why parents simply want their children to do as they say. This is why we have YouTubers chastising NASA for not reading their own Apollo 11 postmortem. This is why corporate procedures emphasize checking boxes instead of critically examining the problem. When your cause-and-effect chain is a hundred steps long and was set in motion by something 5 years ago, economic pressure incentivizes simply trying to avoid blame instead of trying to find the actual systemic problem. The farther up the chain of management a problem is, the longer it takes for the effects to be felt, and the worse we get at finding the root cause. Software engineering has the same issue, where incompetence may only cause performance issues years later, after the original coder has left and the system has scaled past a critical breaking point. This is why we still don’t know how to hire programmers.

Only in the modern era do we have the necessary technological progress and the historical records to be able to accurately evaluate the usefulness of our rituals. Only now can we watch chemical reactions happen at an atomic level. Only now can we have Just Culture and blameless post-mortems that allow identifying actual systemic failures. Only now can I watch a YouTube video explaining how to go from a quantum simulation of particle collisions to a dynamical fluid simulation. Only now can I watch a slow-motion capture at 200,000 frames per second to see exactly how a tiny filament explodes into hot globules that then fly into a nest of zirconium filings and set it aflame exactly where each one lands.

The engineers who invented these flashbulbs couldn’t see any of this. They had to infer it from experimentation, whereas I can just watch it happen and immediately understand exactly what is going on. We live in a pivotal moment of human history, on the cusp of being able to truly understand the entire chain of cause-and-effect for why things happen. We have the ability to measure events with unprecedented accuracy, to tease out tiny differences that catalyze huge reactions.

Unfortunately, the ability to merely see cause and effect is not sufficient when large systems tend to be chaotic. We do not yet have good mathematical frameworks for predicting emergent behavior, and our ability to analyze complex chaotic systems is still in its infancy. We know that large groups of humans consistently display emergent behavior, such as crowd dynamics closely following the equations of fluid dynamics. Likewise, large human organizations are themselves largely emergent behavior, and we never really understood how they were working in the first place. Organizational competence, and coordination problems in general, are our modern epistemic hell, and it means there is no easy way for us to address the failure of our institutions, because we still have no holistic way to analyze the effectiveness of a given organization.

We are tantalizingly close to grasping the true nature of reality, to having the machinations of the universe laid out before us, but we are still missing the tools to fully analyze subtle patterns, to lift a whispered signal out of the thundering noise of spacetime. There is simply no escape from emergent behaviors evolving out of chaotic systems. Until we have the means to analyze these kinds of complex systems, we will forever be at odds with our nature, still tempted to cling on to superstitions of old, because long ago, that was the only thing that kept us alive.


We Could Fix Everything, We Just Don't


I remember growing up with that same old adage of how you could be the next scientist to invent a cure for cancer, or a solution to climate change, or whatever. What they don’t tell you is that we already have solutions for a lot of problems, we just don’t use them. Sometimes this is because the solution is too expensive, but usually it’s because competing interests create a tragedy of the commons. Most problems in the modern age aren’t complicated engineering problems, they’re the same problem: coordination failure.

It was recently revealed that basically every single UEFI SecureBoot implementation ever made can be bypassed with a malicious image file. This means that any manufacturer that allows the user to customize the boot image is now vulnerable to a complete bypass of SecureBoot and Intel Boot Guard. Luckily, the fix for this is pretty simple: don’t make the logo customizable. But how did something this absurd happen in the first place?

The results from our fuzzing and subsequent bug triaging unequivocally say that none of these image parsers were ever tested by IBVs or OEMs. We can confidently say this because we found crashes in almost every parser we tested. Moreover, the fuzzer was able to find the first crashes after running just for a few seconds and, even worse, certain parsers were crashing on valid images found on the Internet. — binarly.io

It’s pretty obvious what happened, actually. The image parsers were written under the assumption that they’d only ever need to load an image file provided by the manufacturer. When this assumption was violated, all hell broke loose, because we don’t test software anymore. None of this happened because engineering is hard. None of this happened because of some tricky, subtle bug. It happened because the people writing the image parsers made an incredibly stupid mistake and then didn’t bother testing it, because the software industry doesn’t bother with QA anymore. Thus, there was no Swiss cheese. There was just one slice of cheese with a gaping hole in it, because it turns out that some manufacturers decided to let users customize their boot image, thinking it would be harmless, and that by itself was enough to wreak havoc.

Every layer of this problem is a different flavor of coordination failure. Either no one on the team that implemented this thought there might need to be a warning about untrusted images, or whoever did bring it up was ignored because it was supposed to be handled by another team. Except whoever was supposed to put in that warning either wasn’t told, or buried it inside a technical document nobody ever reads. The vendors who decided to implement user-customizable boot logos didn’t ask whether this would be a problem, or weren’t told about it.

And nobody, not a single layer in this clown train, implemented a proper QA or pentesting process that could have caught this bug, because we just don’t bother testing anything anymore. Our economic incentives have somehow managed to incentivize building the worst possible piece of shit that still technically works. We know how to avoid this situation. We have decades of experience building in-depth QA processes that we are simply ignoring. We could fix this, we just don’t.

This is not exclusive to software, as this fantastic video about the popcorn button explains. Our economic race to the bottom has been sabotaging almost every aspect of engineering in our society. To save a few cents per microwave, cheap microwaves don’t include a humidity sensor, yet still advertise a popcorn button that can’t actually work properly, which leads to everyone saying “don’t use the popcorn button”, so now nobody uses popcorn buttons even on microwaves that actually have a humidity sensor and a working popcorn button. The cheapskates control the supply chain now. They have pissed in the proverbial pool, and if this sounds familiar, that’s because it’s a classic example of the Tragedy of the Commons.

Except that’s not an excuse. What’s truly absurd is that the tragedy of the commons isn’t inevitable; it has no historical basis whatsoever. We know this because ancient human tribes managed to navigate responsible utilization of common resources all the time. The tragedy of the commons only happens when you have a total failure of collective action. It is the original symptom of societal enshittification.

[...] many nomadic pastoralist societies of Africa and the Middle East in fact "balanced local stocking ratios against seasonal rangeland conditions in ways that were ecologically sound", reflecting a desire for lower risk rather than higher profit...

We actually have a cure for blood cancer now, by the way. Like, we’ve done it. It’s likely that a similar form of immunotherapy will generalize to most forms of cancer. Unfortunately, the only approved gene therapy we have is for sickle-cell disease and costs $2 million per patient, so most people in America simply assume they will never be able to afford any of these treatments, even if they were dying of cancer, because insurance will never cover it. This is actually really bad, because if nobody can afford the treatment, then biotech companies won’t bother investing in it, because it’s not profitable! We have built a society that can’t properly incentivize CURING CANCER. This is despite the fact that socialized healthcare is a proven effective strategy (as long as the government doesn’t sabotage it). We could fix this, we just don’t.

Some people try to complain that this happens because democracy is hard, or whatever, and they’re also wrong. We know exactly what’s wrong with our current voting systems, and CGP Grey even put out a video on it 13 fucking years ago. First-past-the-post inevitably results in a two-party system, because strategic voting is rational behavior, and you can’t break out of this two-party system because of the spoiler effect; the solution is Ranked Choice Voting (or the Alternative Vote). If you want to go further and address gerrymandering, you can use the Single Transferable Vote. All of these better systems were proposed decades ago. We have implemented exactly none of them for the presidential election (except in Maine and Alaska). In fact, America still uses the electoral vote system, which is strictly worse than the popular vote; we all know it’s worse, and we even have a potential solution, but we still can’t get rid of it due to counterproductive societal interests.

We HAVE solutions for these problems. We just don’t use them. We could be running fiber-optic cable to every house in America, and we even know how much it would cost. We just don’t, because we gave the money to corporations, who then used none of it and instead paid themselves huge bonuses. We know that automation is chipping away at low-skill jobs, which means our workforce needs to be better educated, and that providing free college to everyone would be a good idea; we just don’t do it. We know how to build interstate high-speed commuter rail, we just don’t (although Biden is trying). We could fix everything, we just don’t.

We have no excuses anymore. None of these are novel or difficult problems, not even the tragedy of the commons. We can do better. We don’t need AI to fix things. We don’t need new technology to solve these problems. We already know how to do better, we’ve just dug ourselves into a cooperation slump that’s so bad we can’t even implement solutions we already know about, let alone invent new ones. We’re in this hole simply because society is run by people who are incentivized to sabotage cooperation in the name of profits. That’s it.

It’s January 1st of the new year, and with all these people wishing each other a “better year”, I am here to remind you that it will only get worse unless we do something. Society getting worse is not something you are hallucinating. It cannot be fixed by you biking to work, or winning the lottery. We are running on the fumes of our wild technological progress of the past 100 years, and our inability to build social systems that can cooperate will destroy civilization as we know it, unless we do something about it.

We live in what is perhaps the most critical turning point in all of human history, and we’re on a ship that has drifted far off course. The rapid current of technology means that we are swept along faster and faster, making it exponentially harder to steer away from the icebergs ahead of us. We must address our coordination failures. We must build systems that foster better cooperation, or this century won’t be a turning point for humanity, it will be the end of humanity.

"All that would remain of us would be a thin layer in some future rock face. This is the future we must avoid at all costs." — John D. Boswell (melodysheep)

People Can't Care About Everything


Sorry, I need my computer to work

I originally posted an even more snarky response to this, but later deleted it when I realized they were just a teenager. Kids do not have the decades of experience with buggy drivers, infuriating edge cases, and broken promises necessary to understand and contribute to the underlying debate here (nor do they have the social context to know that Xe and I were just joking with each other). Of course, they also don’t know that it’s generally considered poor taste to interject like this, as it tends to annoy everyone and almost always fails to take into consideration the greater context in which someone might be using Windows, or Mac, or TikTok, or Twitter, or whatever corporate hellscape they are trapped in. The thing is, there’s always a reason. You might not like the reason, but there is usually a reason someone has locked themselves inside the Apple ecosystem, or subjected themselves to Twitter, or tried to eke out a living from beneath the lovecraftian whims of YouTube’s recommendation algorithm.

People can only care about so much.

They can’t care about everything. You might think something is important, and it probably is, but… so is everything else. Everything matters to someone, and everything is important to society in general to some degree. Some people think that YouTube isn’t very important, but they’re objectively wrong, as YouTube creators reach billions of people. They change people’s lives on a daily basis. We could argue about how important art and music and creativity are to society, and observe that our capitalist hellhole treats creatives as little more than wage slaves, but then we’d be here all day.

As this blog post bemoaning the loss of Bandcamp explains, They Can And Will Ruin Everything You Love. The only thing that is important to the money vultures is… money. The only people who can build another Bandcamp are people who believe it’s important. I particularly care about the Bandcamp debacle because one of my hobbies is writing music, and I prefer selling it on Bandcamp. If Bandcamp dies, I will no longer have anywhere to offer downloadable lossless versions of my songs. Everything has devolved into shitty streaming services, and there’s nothing I can do about it. I’m too busy fixing everything else that’s broken, there’s no time for me to build a Bandcamp alternative and I’m terrible at web development anyway. Don’t get me started on whether the new solution should be FOSS, because some people believe FOSS is important, and they’d be right! Just look at Cory Doctorow’s talk about enshittification and how proprietary platforms are squeezing the life out of us.

Everything is important!!!

…But I can’t care about everything. You can’t care about everything either, you have to pick your battles. No, that’s too many battles, put some back. That’s still too many battles. You only have 16 waking hours every day to do anything. You have to pick something, and everything you care about has a cost. When everything is important, nothing happens. No websites are created. No projects are built. No progress is made. We simply sit around, bikeshedding over whose pet issue is the most important. There are always trade-offs, and sometimes you can make the wrong ones:

As the corresponding blog post later elaborates, when you are 19 / still a student / unemployed, time is all you have to spend. It can be easy to forget how valuable time is to some people. Even if I won’t touch Apple devices with a 10-foot pole, I can understand why people use them. If all your use cases fall inside Apple’s supported list of behaviors, it can be great to have devices that just work (assuming you can afford them, of course). On the other hand, while I prefer Windows, I know many people who use Linux because Windows either won’t let them do what they want, or literally just doesn’t work. They are willing to put in the time and effort to make their Linux machines work just the way they want, to maintain them, and occasionally to do batshit insane source-code patches that I hope I will never have to do in my life, because it’s important to them.

Back when I was still writing fiction, I got a great comment from an editor who said something along the lines of “writing should be fun, you should only pursue perfection as far as you enjoy.” You can spend your entire life chasing perfection, but you’ll never reach it, and at some point you have to ship something. I’ve been trying to finish up some songs for an album recently and I’ve had to rely on formulaic crutches more than I want to, because at the end of the day, it’s just a hobby, and I simply don’t have the time to be as experimental as I want. My choice is to either release an okay song, or none at all. You can tell where I was hopelessly chasing an unattainable goal for over two years when my output completely stops:

[Image: graph of my song output over time]

Everyone has to make trade-offs, and it can take time to figure out which ones are right for you. Not everyone can contribute to your particular social cause. When you ask someone to care about something, you are implicitly asking them to stop caring about something else, because they have a finite amount of time. They can’t do everything. In order to help you, they must give up something else. Is it grocery shopping? Time to cook? Time to sleep? A social gathering? Playtime with their children?

By no means should you stop asking people to care about something; that part is kind of important. Raising awareness allows individuals to make informed decisions about what trade-offs they are making with their time. However, if someone says they aren’t interested in something you care about… it’s because they have different priorities and the trade-offs didn’t make sense. Maybe they care more about adding a feature to a 50-year-old programming language, and thank goodness they did, because would you have cared enough to put up with this nonsense?

Your time is precious. Other people’s time, doubly so. Mind it well.

