Erik McClure

Stop Making Me Memorize The Borrow Checker


I started learning Rust about 3 or 4 years ago. I am now knee-deep in several very complex Rust projects that keep slamming into the limitations of the Rust compiler. One of the most common and obnoxious problems is hitting a situation the borrow-checker can’t deal with and realizing that I need to completely re-architect how my program works, because lifetimes are “contagious” the same way async is. Naturally, Rust has both!

Despite how obviously useful the borrow-checker is in writing correct code, in practice it is horrendous to work with. This is because the borrow checker cannot run until an entire function compiles. Sometimes it seems to refuse to run until my entire file compiles. Because explicit lifetimes must come from somewhere, they have a habit of “floating up” through the stack, from the point of usage to the point of origin, infecting everything in-between with another explicit generic lifetime parameter. If you end up not needing it, you have to go through and delete every instance of this lifetime, which can sometimes mean modifying 30 or more generic declarations.
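Here’s a contrived sketch of what that “floating up” looks like (the Parser and Compiler names are made up for illustration): one borrowed field, and suddenly every type and function signature above it grows an explicit lifetime.

```rust
// A single borrowed field forces an explicit lifetime onto everything above it.
struct Parser<'a> {
    input: &'a str,
}

struct Compiler<'a> {
    parser: Parser<'a>, // Compiler now needs the lifetime too...
}

fn build_compiler<'a>(source: &'a str) -> Compiler<'a> {
    // ...and so does every function that constructs or returns one.
    Compiler {
        parser: Parser { input: source },
    }
}
```

If you later decide the field should own its data instead, every one of those `'a` parameters has to be deleted by hand.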

In the worst cases, your entire architecture simply cannot work with the borrow checker, and at minimum you’ll need to wrap things in an Rc<>, which again can require changing upwards of 30 or more statements depending on the complexity of your architecture. Other times you realize you need a split borrow, and then have to modify every single function underneath the split borrow to take specific field references instead of the original type. These constant refactors have been a major detractor for the language for years, although some improvements, like impl Trait, have reduced the need for refactoring in some narrow cases.
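A minimal sketch of the split-borrow refactor (the State type here is hypothetical): the helper can no longer take a mutable reference to the whole struct, it has to take exactly the fields it touches.

```rust
struct State {
    cache: Vec<u32>,
    log: Vec<String>,
}

// If this took &mut State, calling it while `cache` is borrowed below would be
// rejected, so it gets rewritten to take only the field it actually needs.
fn append_log(log: &mut Vec<String>, msg: &str) {
    log.push(msg.to_string());
}

fn update(state: &mut State) {
    let first = state.cache.first_mut();     // mutable borrow of state.cache
    append_log(&mut state.log, "updating");  // fine: disjoint field borrow
    if let Some(v) = first {
        *v += 1;
    }
}
```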

This means, to be a highly productive Rust programmer, you basically have to memorize the borrow checker rules, so you get it right the first time. This is stupid, because the whole point of having a type system or a borrow checker is to tell you when you get it wrong, so you don’t have to memorize how the borrow rules work. I don’t need to memorize how all the types work, because these errors get caught almost immediately, and rarely require massive refactors because the whole architecture doesn’t need to exist before it can identify problems.

This is painful because I am an experienced C++ programmer, and C++ has this exact problem except worse: undefined behavior. In the worst case, C++ simply doesn’t check anything, compiles your code wrong, and then does inexplicable and impossible things at runtime for no discernible reason (or it just deletes your entire function). If you run ubsan (undefined behavior sanitizer), it will at least explode at runtime with an error message. Unfortunately, it can only catch undefined behavior that actually happens, so if your test suite doesn’t cover all your code branches you might have undefined behavior lurking in the code somewhere. Even worse, the very existence of undefined behavior sometimes creates a new branch you couldn’t possibly think of testing without knowing about the undefined behavior in the first place!

This means that in order to write C++, you effectively have to memorize the undefined behavior rules, which sucks. Sound familiar? This is both stupid and strictly worse than Rust, because there is no compile-time error at all, only a runtime error if you get it wrong (and you are running ubsan). However, because it’s a runtime error, correcting it usually requires less total refactoring… usually.

At this point, C++ can’t fix its undefined behavior problem because C++ uses undefined behavior to drive optimization, so now it’s just stuck like this forever. Rust can’t really fix borrow checking either, because borrow checking is embedded so deeply into the compiler. All Rust can do is make the borrow checker more powerful (probably by introducing partial borrows, which seem stuck in eternal bikeshedding hell) or introduce more powerful IDE tooling that can make refactors less painful and more automatic, like automatically removing a generic parameter from everywhere it was used.

Problems like these are unfortunate, because they drive people towards using C for its “simplicity”, when in reality they are simply deferring logic errors until runtime. I think Rust manages to “get away” with its excessive verbosity because “safe C++” is even more horrendously verbose and arcane, and safe C++ is what Rust is really competing against right now. I just think Rust needs more competition.

Any prospective Rust competitor, however, needs to be very cognizant of the tradeoffs they force programmers to make in exchange for correctness. It is not sufficient to invent a language that makes it possible to write provably correct kernel-level code, it has to be easy to use as well, and we really need to get away from indirectly forcing programmers to anticipate what the compiler will do simply to be productive. It’s not the 1970s anymore, writing a program shouldn’t feel like taking a stack of punchcards to the mainframe to see if it works or not. Rust is not the answer, it is simply a step towards the answer.


Rust Async Makes Me Want To Gouge My Eyes Out


One of the most fundamental problems with Rust is the design of Result. It is a lightweight, standardized error return value, similar to C-style error codes but implemented at a type system level that can contain arbitrary information. They are easy to use and very useful, and the ecosystem encourages you to use them over panic! whenever possible. Unfortunately, this ends up creating a problem. Result is not like a C++ exception because it doesn’t contain a stacktrace by default, nor does the compiler have any idea where it was first constructed, unless the error type it contains decides to include that information upon construction by using backtrace.
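As a rough sketch of what “opting in” looks like, here’s a hypothetical error type that only carries a backtrace because it explicitly captures one at construction:

```rust
use std::backtrace::Backtrace;

// Hypothetical error type: the backtrace exists only because we capture it
// ourselves. A plain Result error carries no location information at all.
#[derive(Debug)]
struct MyError {
    msg: String,
    backtrace: Backtrace,
}

impl MyError {
    fn new(msg: impl Into<String>) -> Self {
        MyError {
            msg: msg.into(),
            // Respects RUST_BACKTRACE; use force_capture() to always collect one.
            backtrace: Backtrace::capture(),
        }
    }
}

fn do_thing() -> Result<(), MyError> {
    Err(MyError::new("something went wrong"))
}
```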

You can catch an unwinding panic!, which is implemented much more like a C++ exception, but a panic! that aborts cannot be caught, and it’s impossible to tell at compile-time whether a panic will unwind, because the behavior can be changed at runtime. Another interesting difference is that the panic handler is invoked before the panic starts unwinding the stack. These implementation details push you towards using the panic handler purely as an uncaught exception handler, where you can do nothing but perhaps save critical information and dump a stacktrace before exiting. As a result, panic! is designed for and used almost exclusively for unrecoverable errors.
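A minimal example of that pattern, using std::panic::set_hook as a last-resort dump (nothing here is specific to any framework):

```rust
use std::backtrace::Backtrace;

fn main() {
    // The hook runs before any unwinding (or before the abort), so it is the
    // last reliable place to save state, whether or not the panic is catchable.
    std::panic::set_hook(Box::new(|info| {
        eprintln!("fatal: {info}");
        eprintln!("{}", Backtrace::force_capture());
    }));

    panic!("unrecoverable error");
}
```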

If you want an error that is recoverable, you use Result… which doesn’t have a backtrace, unless you and all your dependencies are using some variant of anyhow (or its various forks), which allows you to add backtrace information. If the error is coming from a dependency that doesn’t use anyhow, you’re screwed. There actually is an RFC to fix this, but it’s been open for six years and shows no signs of being merged anytime soon.
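For comparison, this is roughly what the anyhow approach buys you (the config-reading function is just an example): context gets attached at the call site, and a backtrace comes along for the ride if RUST_BACKTRACE is set.

```rust
use anyhow::{Context, Result};

fn read_config(path: &str) -> Result<String> {
    // with_context wraps the underlying io::Error; with RUST_BACKTRACE=1,
    // anyhow also captures a backtrace where the error is first created.
    std::fs::read_to_string(path)
        .with_context(|| format!("failed to read config file {path}"))
}

fn main() -> Result<()> {
    let cfg = read_config("settings.toml")?; // hypothetical file name
    println!("{cfg}");
    Ok(())
}
```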

“But”, I hear you ask, “what does this have to do with Rust Async?” Well, like most things, Rust Async makes this annoying part of Rust twice as annoying because the default behavior is to silently eat the error and drop the future, unless you have stored the join handle somewhere, and you are in a position where you can access that join handle to find out what the actual error was. The API for making tokio panic when an unhandled panic happens is still unstable, with the interesting comment of “dropping errors silently is definitely the correct default behavior”. Really? In debug mode? In release mode, fine, that’s reasonable, but if I’ve compiled my program in debug mode I’m pretty sure I want to know if random errors are being thrown. Even with this API change, you’ll have to manually opt-in to it, they won’t helpfully default to this behavior when you compile in debug mode.

Until that feature gets stabilized, you basically have to throw all your JoinHandles into a JoinSet blender so you can tell when something errored out, and unless you are extremely sure you didn’t accidentally drop any JoinHandles on the floor (because Rust does not warn you if you do this), you probably need a timeout function even after your main future has returned, in case there are zombie tasks that are still deadlocked.
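Here’s roughly what that JoinSet blender looks like (the tasks are placeholders): at least join_next() surfaces the panic instead of eating it.

```rust
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let mut set = JoinSet::new();
    set.spawn(async { 1u32 });
    set.spawn(async { panic!("oops") });

    // Unlike a dropped JoinHandle, join_next() actually reports the panic
    // instead of silently discarding it.
    while let Some(res) = set.join_next().await {
        match res {
            Ok(v) => println!("task finished with {v}"),
            Err(e) if e.is_panic() => eprintln!("task panicked: {e}"),
            Err(e) => eprintln!("task cancelled: {e}"),
        }
    }
}
```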

Oh, have I mentioned deadlocks? Because that’s what Rust async gives you instead of errors. Did you forget to await something? Deadlock. Did you await things in the wrong order? Deadlock. Did you forget to store the join handle and an error happened? Deadlock. Did you call a synchronous function 5 layers deep in the callstack that invokes the async runtime, because it doesn’t know it’s already inside an async call? Deadlock. Did you implement a poll() function incorrectly? Deadlock.
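The “forgot to await something” case is a good illustration of how innocuous it looks (the oneshot channel is just for demonstration):

```rust
use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (tx, rx) = oneshot::channel::<u32>();

    // The async block is created but never awaited. It still owns `tx`, so the
    // channel is never closed and never written to...
    let _forgotten = async move {
        tx.send(42).unwrap();
    };

    // ...which means this await never completes. No error, no warning: a hang.
    let value = rx.await;
    println!("got {value:?}");
}
```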

For simple deadlocks, something like tokio-console might be able to tell you something useful (“might” is doing a lot of work here). However, any time you forget to await something, or don’t call join on the LocalSet, or add things to the wrong LocalSet, or your poll function isn’t returning the right value, or the async waker was called incorrectly, you’re just screwed, because there is no list of “pending futures that have not been awaited yet” that you can look through, unless you get saved by your IDE noticing you didn’t await the future, which it often doesn’t. It definitely doesn’t tell you about accidentally dropping a JoinHandle, which is one of the most common issues.

But why would you have to implement a poll function? That’s reserved for advanced users– Nope, nope, actually you have to do that when implementing literally any AsyncRead/AsyncWrite trait. Oh, sorry, there are actually 4 different possible AsyncRead/AsyncWrite traits, and they’re all slightly different and completely incompatible with each other, but they’re all equally easy to fuck up. Everything in Rust Async is absurdly easy to fuck up, and your reward is always the same: [your-program].exe has been running for over 60 seconds.
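For reference, here is roughly what implementing tokio’s flavor of AsyncRead forces on you (the CountingReader wrapper is made up), with plenty of opportunities to return the wrong thing from poll_read:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};

// Hypothetical wrapper that counts bytes as they pass through an inner reader.
struct CountingReader<R> {
    inner: R,
    bytes: usize,
}

impl<R: AsyncRead + Unpin> AsyncRead for CountingReader<R> {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        let before = buf.filled().len();
        // Returning the wrong Poll variant here, or losing the waker, is
        // exactly the kind of mistake that turns into a silent deadlock.
        match Pin::new(&mut self.inner).poll_read(cx, buf) {
            Poll::Ready(Ok(())) => {
                self.bytes += buf.filled().len() - before;
                Poll::Ready(Ok(()))
            }
            other => other,
        }
    }
}
```

The futures-io version of the same trait has a different poll_read signature, so this impl doesn’t transfer.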

I haven’t even mentioned how the tokio and futures runtimes are almost the same but have subtle differences between them, and tokio relies on aspects of futures that have been factored out into futures-util, which should be in the standard library but isn’t, because literally the only thing they actually standardized on was std::future itself. All this is ignoring the usual complaints about async function color-coding - I’m complaining about obnoxious implementation footguns on top of all the usual annoyances involved with poll-based async. Trying to use async is like trying to use a hammer made out of hundreds of tiny footguns hot-glued together.

I wish async was just one cursed corner of Rust that had its warts relatively self-contained, but that isn’t the case. Rust async is a microcosm of an endless stream of basic usability problems that the language simply doesn’t fix, and might not ever fix. I’m honestly not sure how they’re going to fix the split-borrow problem because the type system isn’t powerful enough to encode where a particular borrow came from, which is required to implement spatially disjoint borrows, which ends up creating an endless cascade of subtle complications.

For example, there are quite a few cases where serde_json errors are not very helpful. None of these situations would matter if you could open a debugger and go straight to what was throwing the error, but you can’t, because this is Rust and serde_json doesn’t use anyhow, so you can’t inject any error context. format_serde_error was created to solve this exact problem, but it is no longer maintained and is buggy. Also, artifact dependencies still aren’t stabilized, despite the very obvious use-case of needing to test inter-process communication, which comes up in basically any process management framework. So this crazy hack exists instead.
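To be fair, you can at least surface a line and column yourself; a minimal sketch (the Config struct is hypothetical):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    name: String,
    count: u32,
}

fn main() {
    let text = r#"{ "name": "demo", "count": "not a number" }"#;
    match serde_json::from_str::<Config>(text) {
        Ok(cfg) => println!("parsed: {cfg:?}"),
        // serde_json errors do expose a line and column, even if they can't
        // tell you which call site produced them.
        Err(e) => eprintln!("parse error at line {}, column {}: {e}", e.line(), e.column()),
    }
}
```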

Rust’s ecosystem heavily relies on two undebuggable-by-default constructions: macros and async, which makes actually learning how to debug production Rust code about as fun as pulling your own teeth out. I have legitimately had an easier time hunting down memory corruption errors in C++ than trying to figure out where a particular error is being thrown when it is hidden inside a macro, inside an error with no stacktrace information, because C++ has mature tooling for hunting down various kinds of memory errors.

Because of last year’s shenanigans, I am no longer confident that any of these problems will ever be fixed. Rust’s development has slowed to a crawl, and it seems like it’ll take years to stabilize features like variadic generics, which are currently still in the design phase despite all the problems the ecosystem runs into without them. It is extremely frustrating to see comments saying “oh the ecosystem is just immature” when those comments are 5 years old. On the other hand, I am tired of clueless C or C++ fans trying to throw bricks at Rust over these kinds of problems when C++ has far worse sins. Because of this, I will continue building all future projects in Rust, at least until the dependently typed language I’m working on has a real compiler instead of a half-broken interpreter.

Because hey, at least it isn’t C.


Engineers Only Get Paid If Something Is Broken


Recently, Rust people have been getting frustrated with C developers who seem to base their entire personal identity on being able to do unsafe memory things. The thing is, this is not a phenomenon unique to C developers, or even software developers in general, although the visibility of the kernel C developers is higher than that of the average programmer. Most people will be upset about technology that makes their particular skillset irrelevant. This is best summarized by a famous quote:

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it" — Upton Sinclair

Professional C developers who dislike Rust’s borrow checker are almost always extremely good at doing manual memory management while making very few detectable errors (I say detectable because they usually miss some edge-case that turns into a security nightmare 10 years later). They are either paid lots of money to do this specific task, or they have almost no experience doing anything else. They are one-trick ponies, and they are terrified of their one skill being made irrelevant, at which point they will either no longer make lots of money, or not have a job at all.

You could argue that they could simply learn Rust, but you must understand that they believe they are singularly talented at C, which in some cases may actually be true. If they start learning Rust now, they might end up just being an average Rust developer, and the thought of being average is absolutely terrifying to them. This is because it’s perceived to be a loss of social status, which human brains are hardwired to avoid at all costs. It sounds like they’re about to be “deported” because that is the exact psychological response that potentially losing social status provokes.

It’s not just languages, either. When programmers ask “why is this ecosystem such a disaster”, half the time it’s because somebody is getting paid to deal with it. Our industry is trapped in an endless loop of a startup building a new technology on top of some half-broken ecosystem, exploding in popularity, and then everyone using the startup’s technology hiring people to deal with the ecosystem it’s built on top of, and those people don’t actually want anyone to fix it or they’ll be out of a job. There is no escaping the fact that, if someone was getting paid to deal with something that was broken, and you fix it, you just made them irrelevant.

In 30 years, when Rust is slowly being replaced by something better, Rust developers will behave the exact same way. Someone will invent a borrow checker that is much more powerful and capable of solving most of the annoying borrow situations that baffle the current Rust borrow checker. Their response will be that this language is for “lazy” programmers, who don’t want to be as precise as a Real Rust Programmer. They’ll complain about things that don’t make any sense because they’ve never actually used the language they’re complaining about. The Rust programmers will sound just as dumb as C programmers do today.

I know this will happen because this already happens in literally every other field in existence. Musicians still sometimes claim that if you can’t play an actual instrument you aren’t a “real” musician, whatever that is. There was a big fight when Photoshop came out because artists complained that “ctrl-Z” was cheating and if you can’t paint on a real canvas you aren’t a Real Artist. It’s everywhere, and it’s universal.

This is not a programmer problem, it’s a people problem. When you look at this through the lens of livelihoods being threatened, you can instantly see that this is all the exact same instinctual human reaction: they have a high status because they are incredibly skilled at a particular thing, and New Thing is threatening to make that skill either irrelevent, or less important, and they don’t want to lose status.

The best defense against this behavior seems to be skill generalization and good self-esteem. If you are confident in your abilities as a musician, you don’t need to worry about people who are good at using a sequencer, instead you might try to team up with them. If you are confident in your general problem solving abilities as a programmer, then the language barely matters, what matters is which language is best suited for the problem at hand.

Software engineering in particular seems to suffer from hyper-specialization, with people having jobs working with extremely specific frameworks, like React, or Kubernetes, or whatever the newest Javascript framework is. It might be that the complexity of our problems is outstripping our tools’ abstractions, but regardless of the cause, if we don’t get things under control soon, this will just keep getting worse.


Measuring Competence Is Epistemic Hell


Sturgeon’s law states that 90% of everything is crap. Combined with Hanlon’s Razor, we arrive at the inescapable conclusion that most problems are caused by incompetence. What’s particularly interesting is that the number of incompetent people in a system tends to increase the higher up you go. Part of this is due to the Peter Principle, where organizations promote employees until they become incompetent, but this happens in the first place because it becomes harder to measure competence the longer it takes for the effects of actions to be felt, and as a species we have no way of measuring long-term incompetence. Instead, we rely on social cues, and tend to use whatever our local culture determines is “competent”.

One way to try to address this is to teach better critical thinking, but this almost always runs into fierce objections from parents who don’t want schools to “undermine parental authority”, which is what happened with the 2012 Republican Party of Texas platform (original). This kind of thinking is actually fairly common, and it is not a fluke of human nature - it is a feature.

To understand why humans can be inquisitive and intelligent on an individual level, but follow arbitrary and sometimes counterproductive rituals on a cultural level, you must understand that our ancestors lived in epistemic hell. My favorite example is the tribe that had a very long and complicated ritual for preparing manioc, which contained dangerous amounts of cyanide:

In the Americas, where manioc was first domesticated, societies who have relied on bitter varieties for thousands of years show no evidence of chronic cyanide poisoning. In the Colombian Amazon, for example, indigenous Tukanoans use a multistep, multiday processing technique that involves scraping, grating, and finally washing the roots in order to separate the fiber, starch, and liquid. Once separated, the liquid is boiled into a beverage, but the fiber and starch must then sit for two more days, when they can then be baked and eaten.

[..] even if the processing was ineffective, such that cases of goiter (swollen necks) or neurological problems were common, it would still be hard to recognize the link between these chronic health issues and eating manioc. Low cyanogenic varieties are typically boiled, but boiling alone is insufficient to prevent the chronic conditions for bitter varieties. Boiling does, however, remove or reduce the bitter taste and prevent the acute symptoms.

So, if one did the common-sense thing and just boiled the high-cyanogenic manioc, everything would seem fine. [..] Consider what might result if a self-reliant Tukanoan mother decided to drop any seemingly unnecessary steps from the processing of her bitter manioc. She might critically examine the procedure handed down to her from earlier generations and conclude that the goal of the procedure is to remove the bitter taste. She might then experiment with alternative procedures by dropping some of the more labor-intensive or time-consuming steps. She’d find that with a shorter and much less labor-intensive process, she could remove the bitter taste. Adopting this easier protocol, she would have more time for other activities, like caring for her children. Of course, years or decades later her family would begin to develop the symptoms of chronic cyanide poisoning.

Thus, the unwillingness of this mother to take on faith the practices handed down to her from earlier generations would result in sickness and early death for members of her family. Individual learning does not pay here, and intuitions are misleading. — "The Secret Of Our Success" by Joseph Henrich

Without modern tools, there is no possible way (other than acquiring brain damage from chronic cyanide poisoning) for an ancient human to realize that every step of the ritual is actually necessary, because without extensive experimentation over many human lifetimes, it isn’t obvious what danger the ritual is guarding against, and if it’s working as intended, no one will have seen the danger or be able to know about it in the first place! It seems that evolution always kept around enough sacrificial intelligent humans to tinker with new possible rituals, but always ensured that the majority of the population would obey the safe, known ways of doing things, without questioning them, because trying to rationally evaluate an opaque ritual meant death. Not even the culture itself knew what disaster or point of failure the ritual was actually preventing, only that it kept them alive. Religion is simply a convenient way of packaging rituals; if you look at the rules set out by many ancient religions, a lot of them start looking like “how to run a functioning society” and include things like “keep your toilet clean”. They got popular because they worked; we just had no idea why, and in many cases couldn’t have possibly figured out why with the technology at the time. Even worse, if you got it wrong, it could take decades until you finally manifested an affliction that actually started causing problems.

This is the core evolutionary drive behind religion and conservative mindsets, where obeying authority is paramount to survival. In modern times, we could communicate to our children why doing a particular thing is bad, because we know the entire chain of cause and effect. Just a few hundred years ago, we couldn’t even do that! A famous example is the effort to get iodine added to salt. Doctors didn’t resist the idea of adding iodine to salt for no reason, they resisted it because at every dosage amount that seemed like it could have an effect, it made people sick! They had experiments on fish that showed that iodine seemed to make goiters go away, but giving people iodine supplements would always make them sick. At this point in time, nobody had any evidence whatsoever that micronutrients existed. Giving people just 150 micrograms of iodine a day, accomplished by evenly mixing tiny grains of potassium iodide into a kilogram of salt, seemed like homeopathic medicine. There was no known substance that had any effect at that little concentration. Only by taking a leap of faith could Otto Bayard theorize that perhaps we needed just a tiny amount of iodine, going against all known nutritional science at the time.

Humans likely evolved culture as an alternative to animals’ reliance on old pack members to know what to do when an extremely rare but devastating event happened every hundred-ish years. Rituals could seem completely nonsensical inside a single human lifespan, because they addressed problems at a societal level that only happened every 200 years, or slow-acting chronic issues. In one case, ancient elephant matriarchs were the only ones capable of remembering waterholes distant enough to survive a drought that only happened once every 35 years. The herds that lost their matriarchs all died because they had lost this knowledge.

We evolved logic to solve problems that had clear first-order effects, but we aren’t very good at evaluating second-order effects. Long-lived humans were capable of finding cause and effect links that happened over a human lifespan, but only human culture, perpetuating strange and bizarre rituals created out of random experimentation, could deal with problems that had very long, unknowable cause and effect chains. It is very hard to tell if the person building your house is competent if the house only collapses every 150 years when a massive earthquake hits. Various cultures have developed all sorts of indirect methods of measuring competence, but many of them emphasize students obeying their teachers, because the teachers are often perpetuating rituals that are critically important without actually understanding why the rituals are important or what they guard against. It is culture maintaining Chesterton’s fences over enormous timespans. Another good example of epistemic hell is how we cured scurvy by accident and then ruined the cure:

Originally, the Royal Navy was given lemon juice, which works well because it contains a lot of vitamin C. But at some point between 1799 and 1870, someone switched out lemons for limes, which contain a lot less vitamin C. Worse, the lime juice was pumped through copper tubing as part of its processing, which destroyed the little vitamin C that it had to begin with.

This ended up being fine, because ships were so much faster at this point that no one had time to develop scurvy. So everything was all right until 1875, when a British arctic expedition set out on an attempt to reach the North Pole. They had plenty of lime juice and thought they were prepared — but they all got scurvy. The same thing happened a few more times on other polar voyages, and this was enough to convince everyone that citrus juice doesn’t cure scurvy.

Our ancestors weren’t stupid. They were trying to find some kind of logical progression of cause-and-effect, but they lived in epistemic hell. This is why cargo-cult programming exists. This is why urban legends persist. This is why parents simply want their children to do as they say. This is why we have youtubers chastising NASA for not reading their own Apollo 11 postmortem. This is why corporate procedures emphasize checking boxes instead of critically examining the problem. When your cause-and-effect chain is a hundred steps long and caused by something 5 years ago, economic pressure incentivizes simply trying to avoid blame instead of trying to find the actual systemic problem. The farther up the chain of management a problem is, the longer it takes for the effects to be felt, and the worse we get at finding the root cause. Software engineering has the same issue, where incompetence may only cause performance issues years later, after the original coder has left, and the system has scaled up beyond a critical breaking point. This is why we still don’t know how to hire programmers.

Only in the modern era do we have the necessary technological progress and the historical records to be able to accurately evaluate the usefulness of our rituals. Only now can we watch chemical reactions happen at an atomic level. Only now can we have Just Culture and blameless post-mortems that allow identifying actual systemic failures. Only now can I watch a YouTube video explaining how to go from a quantum simulation of particle collisions to a dynamical fluid simulation. Only now can I watch a slow-motion capture at 200000 frames per second to see exactly how a tiny filament explodes into hot globules that then fly into a nest of zirconium filings and set it aflame exactly where each one lands.

The engineers who invented these flashbulbs couldn’t see any of this. They had to infer it from experimentation, whereas I can just watch it happen and immediately understand exactly what is going on. We live in a pivotal moment of human history, on the cusp of being able to truly understand the entire chain of cause-and-effect for why things happen. We have the ability to measure events with unprecedented accuracy, to tease out tiny differences that catalyze huge reactions.

Unfortunately, the ability to merely see cause-and-effect is not sufficient when large systems tend to be chaotic. We do not yet have good mathematical frameworks for predicting emergent behavior, and our ability to analyze complex chaotic systems is still in its infancy. We know that large groups of humans consistently display emergent behavior, such as crowd dynamics closely following the equations of fluid dynamics. Likewise, large human organizations are themselves largely emergent behavior, and we never really understood how they were working in the first place. Organizational competence, and coordination problems in general, are our modern epistemic hell, and it means there is no easy way for us to address the failure of our institutions, because we still have no holistic way to analyze the effectiveness of a given organization.

We are tantalizingly close to grasping the true nature of reality, to having the machinations of the universe laid out before us, but we are still missing the tools to fully analyze subtle patterns, to lift a whispered signal out of the thundering noise of spacetime. There is simply no escape from emergent behaviors evolving out of chaotic systems. Until we have the means to analyze these kinds of complex systems, we will forever be at odds with our nature, still tempted to cling on to superstitions of old, because long ago, that was the only thing that kept us alive.


Discord Should Remove Usernames Entirely


Discord’s Recent Announcement made a lot of people mad, mostly because of Hyrum’s Law - users were relying on unintended observable behavior in the original username system, and are mad that their use-cases are being broken despite very good evidence that the current system is problematic. I think the major issue here is that Discord didn’t go far enough, and as a result, it’s confusing users who are unaware of the technical and practical reasons for the username change, or what a username is even for.

There are several issues being brought up with the username change. One is that users are very upset about usernames being ASCII-only alphanumeric, presumably because they do not realize that Discord is only ever going to show their usernames for the purposes of adding friends. Their Display Name is what everyone will normally see, which can be any arbitrary unicode. Discord only spent a single sentence mentioning the problem with someone’s username being written in 𝕨𝕚𝕕𝕖 𝕥𝕖𝕩𝕥 and I think a lot of users missed just how big of a problem this is. Any kind of strange character in a username would be liable to render it completely unsearchable, could easily get corrupted when sent over ASCII-only text mediums, and essentially had to be copy+pasted verbatim or it wouldn’t work.

However, some users wanted to be unsearchable, because they had stalkers or were very popular and didn’t want random people finding their Discord account. Discriminators and case-sensitivity essentially created a searchability problem, which users were utilizing on purpose to make it harder for people to search for them. The solution to this is extremely simple, and was in fact a feature of many early chat apps: let the user turn off the ability for people to search for their username. That’s what people actually want.

What Discord is trying to do, and communicating incredibly poorly, is transform usernames into friend codes. They say this in a very roundabout way for some reason, and they are also allowing people to essentially reserve custom friend codes. This is silly. Discord should instead replace usernames with friend codes, and provide an opt-in fuzzy search mechanism that tries to find someone based on their Display Name, if users want to be discoverable that way. Discord should let you either regenerate or completely disable your own friend code, if users don’t want random people trying to friend them.

What makes this so silly is that nothing is preventing Discord from doing this, because you log in with your e-mail anyway! By replacing usernames with display names, Discord has removed all functionality from them aside from friend codes, so they should just turn usernames into friend codes and stop confusing everyone so much. There is absolutely no reason a user should have to keep track of their username, display name, and server-specific nicknames, and letting users reserve custom friend codes is never going to work, because everyone is going to fight over common friend codes. Force the friend codes to be random 10-character alphanumeric strings. Stop pretending they should be anything else. Stop letting people reserve specific ones.
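Generating such a code is trivial; here’s a sketch using the rand crate’s Alphanumeric distribution (rand 0.8 API, function name mine):

```rust
use rand::{distributions::Alphanumeric, Rng};

// A random 10-character alphanumeric friend code that can be regenerated or
// disabled on demand, rather than something users fight over.
fn new_friend_code() -> String {
    rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(10)
        .map(char::from)
        .collect()
}
```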

There is one exception to this that I would tolerate: a custom profile URL. If you wanted to allow people with nitro to, for whatever reason, pay to have a special URL that linked to their profile, this could be done on a first-come first-serve basis, and it would be pretty obvious to everyone why it had to be unique and an ASCII-compatible URL.

I’m really tired of companies making a decision for good engineering reasons, and then implementing that decision in the most confusing way possible and dismissing anyone who complains as a luddite who hates change. There are better ways to communicate these kinds of changes. If your users are confused and angry about it, then it’s your fault, not theirs.

