Erik McClure

The Technological Tsunami


My relationship with AI is getting increasingly strange. Generalist AIs are still mostly useless, but narrow AIs continue to produce very impressive results. We have plenty of AIs that are better than any human at specific tasks like spotting cancer, but no AIs that can exercise common sense. We can synthesize terrifyingly realistic recreations of almost anyone’s voice, but they must be handheld by humans to produce consistent emotional inflection. We have self-driving cars that work fine at noon on a clear day with no construction, but an errant traffic cone makes them panic.

This is called “spiky intelligence”, and it is why ChatGPT can solve incredibly difficult math olympiad questions but struggles to push a button on a webpage. It seems to me that all these smart people with PhDs saying that AI will take over the workforce are convinced that, if AIs keep getting better at tackling difficult problems, we’ll eventually be able to train AIs that can also handle “easy” problems.

This is the exact same error that resulted in the second AI winter of the 90s - when researchers built expert systems that could outperform humans in narrow situations, they simply assumed they would soon be able to outperform all humans in all situations. This, obviously, didn’t happen, but task-specific engines did emerge from it, and now it is a well-known fact that your phone has enough processing power to effortlessly destroy every single human chess grandmaster that has ever lived. We still play chess, though.

What worries me is that this kind of spiky intelligence, despite lacking general common sense, will still radically upend the economy in ways that are simply impossible for a human brain to anticipate, because these AIs are, by definition, alien intelligences that defy all human intuition. The coming AI revolution is dangerous not because it’s going to destroy the whole world if we get it wrong, but because it is almost impossible to anticipate in any meaningful way. No human is capable of accurately guessing what weirdly specific task an AI might find easy or extraordinarily difficult. It’s XKCD #1425 but randomized for every single task on the entire planet:

In CS, it can be hard to explain the difference between the easy and the virtually impossible

AI enthusiasts often like to talk about The Singularity, a point in time when technological progress accelerates beyond human understanding and thus the future beyond it becomes unknowable. To me, this is not a very useful thing to think about. After all, we’re already incapable of predicting what society will look like 10 years from now. What’s concerning is that we’re used to being able to prepare for the next 2-3 years (ignoring black swan events), and I anticipate that AI will cause economic chaos in ways we cannot predict, precisely because it will rapidly automate entire categories of human employment out of existence, randomly. What will happen when we start automating entire industries faster than we can retrain people? What happens when someone tries to migrate to another career only to have that career automated away the moment they graduate?

We already struggle to keep up with the rapid pace of change, and AI is about to automate everything even faster, in extremely unpredictable ways. There may be a moment when it becomes impossible to anticipate the trajectory of your career three months from now, without AGI ever happening. We don’t need a superintelligent godlike AI to fuck everything up; the extraordinarily powerful narrow AIs we’re working on right now can fuck up the whole global economy by themselves. This moment is the only “Singularity” that I care about - a sort of Technological Tsunami, when entire economic sectors are swept away by rapid automation so quickly that workers can’t course-correct to a new field fast enough.

We have options, because we at least know that it will fuck up the economy; we just don’t know how. The easiest and most pragmatic solution is UBI, but this seems difficult to make happen in a society run by rich people who are largely rewarded by how evil they are. There are plenty of political groups that are pushing for these kinds of solutions, but global policymakers appear to have been captured by AI money, which is only concerned with the dangers of a mythological AGI superintelligence rather than the impending economic catastrophe that is already beginning to develop. Because of this, I think there is a real question over whether or not human society will survive the coming technological tsunami. Again, we don’t need to invent AGI to destroy ourselves. We didn’t need AI to build nukes.

With that said, some people seem to deny that AGI will ever happen, which is also clearly wrong, at least to me. There are many things that will eventually happen, based on our current understanding of physics (and assuming we don’t blow ourselves up). Eventually we’ll cure cancer. Eventually we’ll reverse aging. Eventually we’ll have cybernetic implants and androids. Eventually we’ll be able to upload human minds to a computer. Eventually we’ll build a general artificial intelligence capable of improving itself. It might take 10 years or 100 years or 1000 years, but these are all things that will almost certainly happen given enough time and effort; we just don’t know when. At the very least, if you build as much computational power as the entire combined brainpower of the human race, you’ll be able to brute-force a superintelligence of some kind, and we’d better have solved the alignment problem by then, or augmented ourselves enough to handle it.

At the same time, AI companies continue making wild extrapolations about the capabilities of AIs that simply don’t line up with real world performance. You cannot assume that an AI that scores better than all humans at every test will actually be good at anything other than taking tests, even if humans who score highly on those tests sometimes accomplish amazing things. I have a friend who was placed in Mensa at a very young age after scoring high on an IQ test. They complain that the only thing this group of very smart people do is argue about how to run the organization and what the latest cool puzzles are.

If you know you are actually much more intelligent than the statistical average, increase your humility. It is too easy to believe your own judgements, to get stuck in your own bullshit. Being smart does not make you wise. Wisdom comes from constantly doubting yourself, and questioning your own thoughts and beliefs. Never think, even for a moment, that you have 'settled' anything completely. It's okay to know you are bright, it is not okay to think that gives you any certainty or authority of understanding. — Chatoyance

The world’s smartest people are struggling to extrapolate the capabilities of an extremely spiky and utterly alien narrow intelligence, because it defies basic human intuition. Assuming an AI will be good at performing arbitrary tasks because it scored well on a test is the same kind of attribution error that happens with experts in a specific field - people will trust the expert’s opinion on something the expert has no experience with, like the economy, even though this almost never works out. This is such a persistent problem because highly intelligent people can invent plausible sounding arguments to support almost any position, and it can be exceedingly difficult to find the logical error in them. We are lucky that our current LLMs usually make egregious errors that are obviously wrong, instead of extremely subtle errors that would be almost impossible to detect.

We are in the middle of an AI revolution that will create new, extraordinarily powerful tools whose effects are almost impossible to predict. Instead of doing anything about the impending economic catastrophe, we are chasing AI safety hysteria and telling AGI superintelligence ghost stories that will likely not happen for decades, if not centuries. Otherwise intelligent people are convincing themselves that there’s no point worrying about the economy crashing if AGI makes humans irrelevant. We’re so busy trying to avoid flying too close to the sun we haven’t noticed a technological tsunami rising up beneath us, and if we continue ignoring it, we’ll drown before we even become airborne.


Leftists Are In A Purity Death Spiral


It seems almost impossible to describe this very simple concept to an increasingly large percentage of leftists: If you disagree with someone’s opinion on [Political Position A], but agree with them on [Political Position B], you can still work with them to make [Political Position B] happen, without compromising your stance on [Political Position A]. This is called forming a political coalition, a temporary alliance to achieve a common goal. Importantly, you must understand that a political party is not and will never be a cohesive collection of people who all agree with each other. It is literally impossible because a political party is so huge and diverse.

Diversity means a diversity of opinions and takes. Because the “average person” is a statistical fallacy, every single person you know is statistically likely to have at least one really weird or messed up opinion about something - they just aren’t telling you about it. If you’re lucky, it’s about something you don’t care about, but the more purity tests you have, the more lines you draw, the more statistically impossible it becomes for anyone to actually pass them all. This applies to any large group - no matter how “cohesive” a particular group seems, statistically it must be formed through the alliance of many different smaller subgroups, recursively. This recursion usually continues until you reach a group of a couple hundred people, which is the size of an average human tribe, and the largest socially cohesive unit that is possible. Every larger group is, in actuality, multiple subgroups that have come together, each with slightly different views.

A series of circles representing groups

Every “bad opinion” you refuse to engage with is another line in the sand. It cuts you off from potential allies. It shrinks the size of your coalition.

A series of circles representing groups with a dotted line cutting through them

Eventually, you enter a purity spiral, where almost no one can satisfy your demand for moral purity. Everyone has made mistakes. Everyone has bad takes on things sometimes. You cannot effect change if your social group has excluded the entire rest of humanity from it:

A series of circles representing groups with a dotted line cutting through them

This purity spiral has strangled so many leftist spaces that it has become a well-known problem. I see people complaining about it constantly, in many different places. They’re scared and frustrated, because every time someone has a disagreement over something, they’re treated as a potential right-wing infiltrator trying to destroy everything, instead of someone with an honest disagreement. This happens because leftists often cannot conceive of someone who is “morally good” having such an “obviously bad take”, except they don’t consider that maybe the problem isn’t as obvious to everyone else. This happens way more often than you think! Why? Because humans are incredibly diverse! But instead of celebrating this diversity of ideas, the left has cultivated a callout culture problem that severely punishes any deviance from their idea of Moral Purity, which itself is inconsistent and depends on who stumbled on your old tweets.

Post by @ninafelwitch@tech.lgbt

This kind of behavior is incredibly counterproductive. It creates a low-trust environment where everyone is looking over their shoulders, where people are constantly worried about associating with someone who did something vaguely questionable five years ago. An environment ruled by fear is not one that engenders cooperation. In fact, it does the opposite, because a social environment where people are constantly terrified of internet hate mobs is the perfect environment for fascism to flourish. The left did this to itself. It continues to reject allies who don’t adhere to some subgroup’s specific set of beliefs, even though those sets of beliefs are all mutually incompatible with each other. Nuclear power, scientific research, economic systems, voting systems, guns, crypto, AI, you name it, we have a purity test for it. Leftists think this is keeping their movement “pure” when in reality it’s keeping their movement from actually stopping the fascists.

Post by @contrasocial@mastodon.social

Refusing to work with anyone who doesn’t satisfy your particular moral purity test isn’t “standing for something”; it simply means you are doing the fascists’ work for them. An old poem comes to mind:

First, they came for the cryptobros, and I did not speak out—
  Because cryptocurrencies are evil, and the world is better off without them.

Then, they came for the ai artists, and I did not speak out—
  Because ai slop is evil, and the world is better off without those that debase art.

Then, they came for the gun enthusiasts, and I did not speak out—
  Because we needed better gun control anyway, we're better off without them.

Then, they came for me—
  and there was no one left to stop them.

Let’s go through some common objections:

"that's not fair, the original poem wasn't about people who were evil! You've used evil people, as if they would help me!"

Yes, that’s the fucking point. They will, in fact, help you, in the right context, under the right circumstances. Refusing that help is suicide.

"I don't care if it's suicide! Unlike you, I'm willing to die for what I believe in!"

Then go ahead and die. You can take your moral purity with you, because the fascists will shoot you all the same. Purity tests are just a convenient way for you to sabotage any effective resistance we could have mobilized against fascists, and once they’ve killed everyone who disagrees with them, the only people left on the planet will be racist psychopaths, and your moral purity will have succeeded in creating a worse future for everyone. You will sway nobody, because you worked with nobody. You vilified every other potential ally, and so they will simply let you die, and you’ll take your morals to the grave.

What’s frustrating about this particular claim is that it is usually a complete fabrication. Almost nobody is actually this dumb. If the fascists start hunting down gay people and a cryptobro offers to smuggle you into another country to save your life, you’re gonna accept the help even if you hate cryptocurrencies, because you don’t wanna die. The thing is, even if it requires a life-threatening situation to force some people to begrudgingly accept help from those they don’t like, nothing ever fundamentally changes - the cryptobro never hated gay people in the first place! They would have been willing to help you the entire time, but you were too obstinate to accept the help until you had a gun pointed at your head!

"If that's the case, then humanity deserves to go extinct"

If this is your honest belief, you either need therapy or you’re some kind of hardcore transhumanist (in which case, 𝓯𝓲𝓷𝓮, 𝓘 𝓰𝓾𝓮𝓼𝓼). Either way, leave the rest of us alone while we try to actually fix things instead of participating in a doomer death cult. The conservatives have enough of those already.

Now, if anyone is still here, let’s walk through the steps required to build a coalition that enshrines trans rights and ends the Gaza genocide, followed by forming a new coalition that bans generative AI, without compromising any morals in the process. First, we must recognize that active genocide and stripping human rights are higher priority than most other issues we care about. This requires internalizing that, while we can find many allies willing to help us end the genocide in Palestine, many of them will have some pretty shitty opinions on things! You’ll have to put up with:

  • People who like cryptocurrencies
  • People who like capitalism
  • People who don’t like [your preferred economic system]
  • People who like the military
  • Yes, people who like AI too.
  • Even people who disagree with you about [that other thing you really care about]

All of these groups are potential allies. You must internalize that you are only working with these groups to achieve one particular result, and that is where your loyalty ends. You are not chaining yourself to cryptobros, or ai artists, or gun nuts, or libertarians. You are only recognizing that, despite the fact that these people have some pretty terrible beliefs sometimes, we all agree that genocide is bad and stripping trans people of human rights is also bad.

Now, the alliance with people who like the military requires using nuance. Yes, I know twitter has apparently made it impossible for people to use nuance, but you need to understand that “people who like the military” is an enormous section of the populace, so there are going to be a lot of subgroups within it. The people who simply believe the strong should rule the weak are the fascists, the ones you can never work with. The people who believe that violence should be used to defend your values and never against civilians, on the other hand, will likely be strongly against any kind of genocide.

Use this fact to drive a wedge between the two subgroups, separating out the ones that support you while causing in-fighting that weakens the ones against your position. Once you’ve separated the two subgroups, it won’t be hard to show that the current attacks on trans people are preparing the ground for a genocide as well, which will make it much easier to convince your new allies to also oppose those attacks, even if they didn’t previously care that much. By maximizing the number of people you get on your side (the side of “we shouldn’t let Israel murder innocent people” and “the attacks on trans rights are a precursor to genocide”) you can finally become a real, viable threat to the democrats, who don’t seem to have any actual values anymore, so I can’t actually list them.

A complex venn diagram of the various subgroups the coalition is made of

Now that we’ve replaced the current democrats in office, we build a coalition for banning generative AI, leveraging our previous work. We’ll start with the cryptobros. Yes, I know you hate the cryptobros, and sometimes it’s for a good reason. Now it’s time to use one of those reasons, by driving another wedge between subgroups: identify the ones who support generative AI versus the ones who don’t care one way or the other. This is easier if you can express exactly why you object to generative AI - even if you have many reasons, picking one (like its impact on artistic livelihoods and worker rights in general) gives you a sharper cognitive knife to work with, metaphorically speaking. Knowing exactly what you want makes you less vulnerable to a charismatic person selling you ideas that sound good but don’t actually further your goals.

Then you need to go to the AI people. Your goal here is not to throw the entire group under the bus, but to once again leverage nuance to drive apart individual subgroups. When a previously cohesive group realizes it doesn’t actually agree about everything, the group as a whole is greatly weakened. There are many subgroups within AI that only care about AI that has actual research value, like folding proteins or identifying breast cancer or detecting blood cancers. These groups would be happy to support targeted legislation that bans generative AI, like LLMs and image generators, especially if you focus on a specific harmful aspect, like AI being used for misinformation (many AI researchers legitimately want to help society, not harm it). By acquiring allies from some of the AI supporters, without attacking the entire concept of AI as a whole, you’ve shattered their group coherence and greatly weakened the proponents of generative AI.

The other groups likely have a random smattering of support or opposition to generative AI. Pulling in ones already against it will be easy, but the majority of the groups likely don’t care - your job is to pull them to your side, and the best way to do this is by trying to find something they care about that is negatively impacted by AI, like their jobs. Remember that the opposing groups will also be recruiting people to their side, so it is crucial you find a sharp reason, a specific thing that aligns with something that subgroup does care about. It is going to be rare that you can make anyone else care about every single issue you care about, because people can’t care about everything, but if you successfully accomplished something with them before, they might find it prudent to listen to you instead of the other side.

In some cases, if there isn’t an obvious shared value, you may need to offer them something, like joining a different coalition to address one of their key issues. This still doesn’t require you to sacrifice any of your morals - you simply need to find an issue that you both agree on, like universal healthcare or implementing UBI or treating veterans better. In exchange for you working with them in the future on one of those issues, they might be willing to side with you on an issue they essentially have no opinion on. It is not necessary to convince everyone in the entire world to share your exact political opinions, only for them to agree to help you.

Furthermore, after you manage to ban generative AI and you start working on passing UBI, you can go right back to the generative AI supporters. All you need to do is point out that it will be much easier for them to unban generative AI if some form of UBI is passed, and they’ll be willing to help you pass UBI even though you previously worked against them. Despite your past differences, it is still in everyone’s best interest to work together, and no one has to compromise on their morals. You can still oppose generative AI even after its proponents help you pass UBI. Nobody needs to compromise on their morals because fundamentally opposed goals can still share some values. It is crucial to recognize when someone you disagree with shares your values in another area of society, and work with them to further that specific value.

This is largely how any functioning political system works. However, leftist circles keep getting hijacked by moral puritans who insist that even working with anyone who has ever done anything slightly bad will somehow “corrupt” the movement and everyone in it will magically turn into witches, er, democrats. This isn’t really possible, because the democrats don’t actually do anything right now, and even worse, it could be a propaganda tactic. The FBI deliberately used similar tactics against the civil rights movement in the 1960s: they would send inflammatory anonymous letters falsely accusing an African-American organization of misusing funds. These were literally the period equivalent of our modern social media callout posts, except now callout posts can come from astroturfed accounts that seem like real people. All these moral puritans insisting that we shouldn’t “compromise our morals” by cooperating with other people might just be Russian agents, or real people manipulated by Russian agents.

Regardless, whether the moral purity panics that repeatedly consume leftist circles are real or astroturfed by Russian propaganda, they cannot be allowed to continue. If leftists want a snowball’s chance in hell of actually stopping the fascists, they must learn how to cooperate with their fellow human beings instead of demanding moral purity that simply serves to destroy their own movement.


The New Discord Overlay Breaks GSync and Borderless Optimizations


The new discord overlay no longer uses DLL injection, and is instead a permanent HWND_TOPMOST window glued to whatever window it happens to think is a game. Ignoring the fact that discord seems to think FL Studio, the minecraft launcher, and SteamVR’s desktop widget are “video games”, the real problem is that this breaks the Borderless Windowed Optimizations, the most obvious effect of which is disabling GSync/FreeSync on all games that the overlay enables itself on.

so it seems it also yoinks gsync, which means the game running underneath doesn't use gsync anymore. Wonderful!

— Tawmy (@tawmy.dev) March 29, 2025 at 11:45 AM

We can tell that it’s a normal window instead of DLL injection by simply finding the window in the win32 UI tree using inspect.exe:

Discord Overlay Window

Interestingly, they still seem to be using a D3D window to render the overlay. This might be a quirk of using Electron, or it might be a result of whatever library they’re using to render the overlay:

Intermediate D3D sibling window?

The reason this breaks everything is that the borderless windowed optimization relies on a newer presentation model called the DXGI flip model. Instead of copying the contents of the backbuffer to another intermediate buffer used by the desktop for compositing, the compositor can use the backbuffer directly when it is compositing. This flip model was augmented in Windows 10 with Direct Flip, which allows this shared surface to bypass the compositor entirely and send frames to the monitor directly:

Depending on window and buffer configuration, it is possible to bypass desktop composition entirely and directly send application frames to the screen, in the same way that exclusive fullscreen does.

All modern gaming is built on top of this key optimization, because it allows seamless Alt-Tab behavior by allowing the DWM compositor to “wake up” and start compositing the screen like a normal application, then “go to sleep” once it knows a single borderless fullscreen application is the only thing rendering to that monitor, by simply piping its backbuffer directly to the device. If a combination of DXGI_FEATURE_PRESENT_ALLOW_TEARING and the right VSync mode is enabled, the app can update its backbuffer completely out-of-band from the rest of the desktop compositor, which is the only thing that allows GSync/FreeSync to work, as the monitor must sync its own refresh rate to whenever the game happens to complete a frame.
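
To make that dependency concrete, here is a minimal sketch of the feature check a game has to pass before it can present out-of-band at all, written in Rust with the windows crate rather than the C++ you’d normally see. The crate version, feature gates, and exact binding signatures here are assumptions; the underlying DXGI calls (CreateDXGIFactory2, IDXGIFactory5::CheckFeatureSupport, DXGI_FEATURE_PRESENT_ALLOW_TEARING) are the real API.

```rust
// Minimal sketch: ask DXGI whether tearing (out-of-band) presents are even
// available. Assumes the `windows` crate with the Win32_Graphics_Dxgi and
// Win32_Foundation features enabled; exact signatures differ slightly
// between crate versions.
use windows::core::Result;
use windows::Win32::Foundation::BOOL;
use windows::Win32::Graphics::Dxgi::{
    CreateDXGIFactory2, IDXGIFactory5, DXGI_CREATE_FACTORY_FLAGS,
    DXGI_FEATURE_PRESENT_ALLOW_TEARING,
};

fn tearing_supported() -> Result<bool> {
    unsafe {
        let factory: IDXGIFactory5 = CreateDXGIFactory2(DXGI_CREATE_FACTORY_FLAGS(0))?;
        let mut allow = BOOL(0);
        factory.CheckFeatureSupport(
            DXGI_FEATURE_PRESENT_ALLOW_TEARING,
            &mut allow as *mut BOOL as *mut core::ffi::c_void,
            std::mem::size_of::<BOOL>() as u32,
        )?;
        Ok(allow.as_bool())
    }
}

fn main() -> Result<()> {
    // Even when this reports true, a topmost overlay window covering any part
    // of the game drags the compositor back into the loop and the whole
    // optimization is lost.
    println!("tearing presents supported: {}", tearing_supported()?);
    Ok(())
}
```

The swap chain itself also has to be created with a FLIP_* swap effect and DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING, and presented with DXGI_PRESENT_ALLOW_TEARING at a sync interval of zero, for any of this to matter.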

If any part of this pipeline is disrupted, it is no longer possible to forward frames to the monitor outside the normal update sequence of the compositor. Many things can break this, like not turning on optimizations for windowed games, or having Windows fail to recognize something as a game. If the vsync mode is set up wrong, it will break. If the flip mode is wrong, it will break. And most importantly, if even a single pixel of another app is displayed over the game, then in order to display that pixel, the compositor has to composite the window outputs together onto a secondary buffer, which must then be presented at the native refresh rate of the monitor because it has inputs from two different programs at different refresh rates, thus breaking GSync/FreeSync. It will also introduce additional frames of lag even if you don’t use GSync/FreeSync, which you may have noticed when a notification popped up while you were playing a game and everything suddenly felt laggy until it went away.

DLL injection was originally used for in-game overlays because games often used exclusive fullscreen. The drawback of DLL injection is that it crashes games when implemented incorrectly and also makes virus scanners very unhappy. With the new flip models, games don’t need to ask for exclusive fullscreen to get low latency, but they still have to be the only thing on the screen or it doesn’t work. Discord has either ignored why DLL injection was originally used, or decided that the drawbacks of DLL injection aren’t worth it, and has instead simply broken all the optimizations for windowed games that Microsoft introduced. Any half-decent graphics programmer would know this would happen, so it’s obvious that one of two things happened:

  1. Discord never consulted a single graphics dev or gamedev with any experience in how games actually render about how their new overlay would interact with them.
  2. There is a very angry dev stalking the halls of discord HQ right now, cursing at the shadows because she knew. SHE KNEW. SHE WARNED THEM. BUT THEY DIDN’T LISTEN. Her manager probably ignored her warnings, or overruled them, saying “most gamers won’t even notice” or “DLL Injection has too many problems”. And now, if she shares this article with said manager, her manager will look bad and probably try to fire her for the crime of being competent, because that’s how big corporations work.

Usually I default to option (2), but option (1) is also possible if they already laid off the person who knew this would happen last year. But hey, if you are a graphics dev at discord who tried to warn your managers about this trashfire and you got “laid off” under mysterious circumstances, send me a DM on bluesky, I’m always interested in talking to actual competent engineers.

If the new overlay was turned on without your consent (which is what happened to me), you can turn it off again by going to User Settings → Activity Settings → Game Overlay → Enable Overlay. Be careful though, because flipping this option off has crashed the video drivers for two of my friends so far, requiring a full reboot. If you want to uninstall Discord, you can do so from Add/Remove programs, but good luck finding another chat app your friends actually want to use.


Do You Really Think We'll Have Genders In The Future?


Something that is very common is for people pushing the boundaries of technology (or pretending to, anyway) to hold weirdly conservative social views. This is why “techbro” is now a thing, and it’s not really surprising - there are plenty of engineers who are only good at engineering, not at participating in a society, or even understanding how human social interactions work. It is kinda weird when Paul Graham does it, though.

That said, when I see futurists or transhumanists talk about a timeline where humans are uploaded or become cyborgs or whatever, and then they turn around and say stuff like “feminism is bad”, I am utterly baffled. This level of cognitive dissonance is a bit hard to swallow, even in our current political quagmire of fake news and conspiracy theories. These futurists are talking about augmenting human minds beyond our current capabilities, of modifying our bodies to do things we could only dream of, and [checks notes] also transgender people are mentally ill. Wait, what?

The moment we get access to any kind of augmentation, we’re not just going to make a perfect human. Nobody even agrees on what the “perfect body” is in the first place, and people’s response to disagreement seems to be to assert that their subjective opinion is the objectively correct one. For most people, I can excuse this as a lack of imagination. Maybe their response is a genuine question: “well, what else would you do? How can a perfect body be anything other than human?”. But we’re talking about futurists here, they’re literally imagining entire new technological paradigms! They definitely have an imagination!

Oftentimes, as so eloquently explained by Philosophy Tube, such viewpoints exist only to shield ourselves from truths we would rather not think about. Just as we construct phantasms to avoid thinking about our own mortality, some people don’t want to confront the possibility that they aren’t actually comfortable in their own body, because there is very little they can do about it right now. If they convince themselves that, given futuristic technology they would simply give themselves a perfect human body and then they would finally be happy with how they look, they don’t have to confront the lingering doubts eating away at the back of their mind - What if I’m still not happy? What if I actually want to be a different gender? What if I’m actually gay? What if I don’t even want a human form?. Even if a technological solution currently exists, many people trap themselves in social structures that would destroy them if they expressed themselves.

One way to salvage this worldview - to rationalize the phantasm - is to argue that, despite all the potential chaos that technology will unleash, only those civilizations who manage to hold on to Heterosexual Western Values will be successful in the long-term, usually backed up with a remarkably bad understanding of statistics. This has several problems, the first being that it has zero historical precedent - successful civilizations are always whoever can socially adapt to new technological paradigms. The other problem is that it ignores who is going to be first in line to get themselves augmented. Some rich people will, sure, but who do you think is going to be most willing to undergo an extremely risky procedure to give themselves a new limb?

Any guesses?

It’s furries. Furries want tails. Normal people cannot understand how badly furries want tails. They will literally invent new technologies just to give themselves tails. The first novel limbs that aren’t replacements are going to be tails invented by furries, and the furries will get them in droves. The first adopters of mechanical augmentation, cyborgs, and mind uploading will be furries, transgender people, and anyone else who doesn’t feel comfortable in their own body. Anyone who has not trapped their dysmorphia in a phantasm to try to escape it. This is going to give them a massive first-mover advantage, which will be incredibly difficult to catch up to because the benefits of augmentation compound on themselves.

It should be obvious to anyone who has seen technological trends play out that the furries won’t stop at just tails. People will immediately begin augmenting themselves in increasingly exotic ways, bounded only by technological limitations. Good futurists already talk about not just transhuman futures, but posthuman futures, where fragments of humanity inevitably transform into something barely recognizable. This is a fairly common aspect in almost any science fiction talking about what the deep future might look like, because it is logically inescapable. The only constant in nature is change, and so there is simply no possible way that humans would remain static for thousands of years in a civilization that continues to innovate.

All of this gets even more ridiculous if mind-uploading happens soon, because… you’ll be in a computer. You could be literally anything. You don’t even need a physical form anymore. Once mind-uploading happens, do you really think we’ll even have a notion of gender anymore? Of sexual orientation? Archaic notions might survive, but whatever “genders” people choose to be once all physical limitations are removed will be utterly incomprehensible to us. They’ll have much weirder things to debate, like whether or not it’s okay for an uploaded human to have a relationship with an AI with an IQ of 1040. “Genderfluid” isn’t going to scratch the surface of all the weird shit people will get up to.

Perhaps now you might understand why I am so utterly baffled by bigoted futurists, who would not survive in their own predicted futures. They seem to have constructed some kind of phantasm out of their contradictory beliefs, although what frightening truths that phantasm is protecting them against, I can’t say. I wonder if upcoming advances in VR might be more important than we think - perhaps better and more immersive VR could provide people with a safe place to explore alternative physical forms. Maybe then, some people might start to look past the phantasm.


Stop Making Me Memorize The Borrow Checker


I started learning Rust about 3 or 4 years ago. I am now knee-deep in several very complex Rust projects that keep slamming into the limitations of the Rust compiler. One of the most common and obnoxious problems is hitting a situation the borrow-checker can’t deal with and realizing that I need to completely re-architect how my program works, because lifetimes are “contagious” the same way async is. Naturally, Rust has both!

Despite how obviously useful the borrow-checker is in writing correct code, in practice it is horrendous to work with. This is because the borrow checker cannot run until an entire function compiles. Sometimes it seems to refuse to run until my entire file compiles. Because an explicit lifetime must come from somewhere, lifetimes have a habit of “floating up” through the stack, from the point of usage to the point of origin, infecting everything in-between with another explicit generic lifetime parameter. If you end up not needing it, you have to go through and delete every instance of this lifetime, which can sometimes be 30 or more generic declarations that end up needing to be modified.
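
A stripped-down sketch of that “floating up” (the names here are invented, but the shape will be familiar):

```rust
// One borrowed field forces an explicit lifetime on the struct...
struct Parser<'a> {
    input: &'a str,
}

// ...and every type that stores a Parser now needs the same generic lifetime
// parameter, all the way up to wherever the &str actually originates.
struct Session<'a> {
    parser: Parser<'a>,
}

struct App<'a> {
    session: Session<'a>,
}

fn make_app<'a>(input: &'a str) -> App<'a> {
    App { session: Session { parser: Parser { input } } }
}

fn main() {
    let text = String::from("fn main() {}");
    let app = make_app(&text);
    println!("{}", app.session.parser.input);
    // If Parser later switches to owning a String instead, every one of these
    // <'a> annotations has to be hunted down and deleted again.
}
```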

In the worst cases, your entire architecture simply cannot work with the borrow checker, and at minimum you’ll need to wrap things in an Rc<>, which again can require changing upwards of 30 statements depending on the complexity of your architecture. Other times you realize you need a split borrow, and have to then modify every single function beneath the split borrow to take specific field references instead of the original type. These constant refactors have been a major detractor for the language for years, although some improvements, like impl, have reduced the need for refactoring in some narrow cases.
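
The split borrow case in particular looks something like this - a contrived sketch, but it is the shape of the refactor:

```rust
struct Scoreboard {
    names: Vec<String>,
    scores: Vec<u32>,
}

impl Scoreboard {
    // This version does not compile: iterating `self.names` borrows `self`
    // immutably, so calling a `&mut self` method inside the loop is rejected
    // with error[E0502]: cannot borrow `*self` as mutable.
    //
    // fn update(&mut self) {
    //     for (i, name) in self.names.iter().enumerate() {
    //         self.bump(i, name);
    //     }
    // }
    // fn bump(&mut self, i: usize, _name: &str) { self.scores[i] += 1; }

    // The fix is the refactor described above: the helper stops taking
    // `&mut self` and takes only the field it needs, so the two borrows are
    // visibly disjoint.
    fn update(&mut self) {
        for (i, name) in self.names.iter().enumerate() {
            Self::bump(&mut self.scores, i, name);
        }
    }

    fn bump(scores: &mut [u32], i: usize, _name: &str) {
        scores[i] += 1;
    }
}

fn main() {
    let mut board = Scoreboard {
        names: vec!["a".to_string(), "b".to_string()],
        scores: vec![0, 0],
    };
    board.update();
    println!("{:?}", board.scores); // [1, 1]
}
```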

This means, to be a highly productive Rust programmer, you basically have to memorize the borrow checker rules, so you get it right the first time. This is stupid, because the whole point of having a type system or a borrow checker is to tell you when you get it wrong, so you don’t have to memorize how the borrow rules work. I don’t need to memorize how all the types work, because these errors get caught almost immediately, and rarely require massive refactors because the whole architecture doesn’t need to exist before it can identify problems.

This is painful because I am an experienced C++ programmer, and C++ has this exact problem except worse: undefined behavior. In the worst case, C++ simply doesn’t check anything, compiles your code wrong, and then does inexplicable and impossible things at runtime for no discernable reason (or it just deletes your entire function). If you run ubsan (undefined behavior sanitizer), it will at least explode at runtime with an error message. Unfortunately, it can only catch undefined behavior that actually happens, so if your test suite doesn’t cover all your code branches you might have undefined behavior lurking in the code somewhere. Even worse, the very existence of undefined behavior sometimes creates a new branch you couldn’t possibly think of testing without knowing about the undefined behavior in the first place!

This means that in order to write C++, you effectively have to memorize the undefined behavior rules, which sucks. Sound familiar? This is both stupid and strictly worse than Rust, because there is no compile-time error at all, only a runtime error if you get it wrong (and you are running ubsan). However, because it’s a runtime error, correcting it usually requires less total refactoring… usually.

At this point, C++ can’t fix its undefined behavior problem because C++ uses undefined behavior to drive optimization, so now it’s just stuck like this forever. Rust can’t really fix borrow checking either, because borrow checking is embedded so deeply into the compiler at this point. All Rust can do is make the borrow checker more powerful (probably by introducing partial borrows, which seems stuck in eternal bikeshedding hell) or introduce more powerful IDE tooling that can make refactors less painful and more automatic, like automatically removing a generic parameter from everywhere it was used.

Problems like these are unfortunate, because they drive people towards using C for its “simplicity”, when in reality they are simply deferring logic errors until runtime. I think Rust manages to “get away” with its excessive verbosity because “safe C++” is even more horrendously verbose and arcane, and safe C++ is what Rust is really competing against right now. I just think Rust needs more competition.

Any prospective Rust competitor, however, needs to be very cognizant of the tradeoffs they force programmers to make in exchange for correctness. It is not sufficient to invent a language that makes it possible to write provably correct kernel-level code, it has to be easy to use as well, and we really need to get away from indirectly forcing programmers to anticipate what the compiler will do simply to be productive. It’s not the 1970s anymore, writing a program shouldn’t feel like taking a stack of punchcards to the mainframe to see if it works or not. Rust is not the answer, it is simply a step towards the answer.

