Mathematical Notation Is Awful
Today, a friend asked me for help figuring out how to calculate the standard deviation of a discrete probability distribution. I pulled up my notes from college and was able to correctly calculate the standard deviation they had been unable to derive after hours upon hours of searching the internet and trying to piece together poor explanations from questionable sources. The crux of the problem was, as I had suspected, the astonishingly bad notation involved in this particular calculation. You see, the expected value of a given distribution $$ X $$ is expressed as $$ E[X] $$, which is calculated using the following formula:

$$ E[X] = x_1p_1 + x_2p_2 + \dots + x_kp_k = \sum_{i=1}^{k} x_ip_i $$

where $$ p_i $$ is the probability of the outcome $$ x_i $$.
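Stripped of the notation, the actual computation is trivial. Here is a rough Python sketch of the same calculation, including the standard deviation my friend actually needed (the distribution itself is made up purely for illustration):

```python
# A discrete distribution: each outcome x_i paired with its probability p_i.
# (These particular values are invented for illustration.)
dist = [(1, 0.2), (2, 0.5), (3, 0.3)]

def expected_value(dist):
    """E[X]: each outcome weighted by its probability, then summed."""
    return sum(x * p for x, p in dist)

def std_dev(dist):
    """Standard deviation: the square root of E[(X - E[X])^2]."""
    mu = expected_value(dist)
    variance = sum(p * (x - mu) ** 2 for x, p in dist)
    return variance ** 0.5

print(expected_value(dist))  # 2.1
print(std_dev(dist))         # ~0.7
```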
That, however, was only the beginning. Another question required them to find the covariance between two separate discrete distributions, $$ X $$ and $$ Y $$. I had never actually worked with covariance, so my notes were of no help here, and I was forced to return to Wikipedia, which gives this helpful equation:

$$ \operatorname{cov}(X, Y) = E\left[(X - E[X])(Y - E[Y])\right] $$
We’re up to, what, five different ways of saying the same thing? At least, I’m assuming it’s the same thing, but there could be some incredibly subtle distinction between the two that nobody ever explains anywhere except in some academic paper locked up behind a paywall that was published 30 years ago, because apparently mathematicians are okay with this.
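Meanwhile, the computation hiding behind all of those symbols is, again, a few lines of code. A rough sketch, assuming the joint distribution is given as (x, y, probability) triples (the numbers are made up):

```python
# Joint distribution of X and Y: (x, y, probability) triples, invented for illustration.
joint = [(1, 10, 0.25), (2, 20, 0.25), (3, 10, 0.25), (4, 20, 0.25)]

def covariance(joint):
    """cov(X, Y) = E[(X - E[X]) * (Y - E[Y])]."""
    ex = sum(x * p for x, y, p in joint)
    ey = sum(y * p for x, y, p in joint)
    return sum(p * (x - ex) * (y - ey) for x, y, p in joint)

print(covariance(joint))  # 2.5
```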
And this is just one instance where the ambiguity and redundancy of our mathematical notation have caused enormous confusion. I find it particularly telling that the most difficult part of figuring out any mathematical equation, for me, is usually just figuring out what all the goddamn notation even means, because most of it isn’t explained at all. Do you know how many ways we have of taking the derivative of something?
$$ f'(x) $$ is the same as $$ \frac{dy}{dx} $$ or $$ \frac{df}{dx} $$ or even $$ \frac{d}{dx}f(x) $$, which is the same as $$ \dot x $$, which is the same as $$ Df $$, which is technically the same as $$ D_xf(x) $$ and also $$ D_xy $$, which is also the same as $$ f_x(x) $$, provided $$ x $$ is the only variable, because taking the partial derivative of a function with only one variable is exactly the same as taking the ordinary derivative in the first place, and I’ve actually seen math papers abuse this fact instead of using some other sane notation for the derivative. And that’s just for the derivative!
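To make the redundancy concrete, take something as simple as $$ f(x) = x^2 $$; most of those spellings are just different ways of writing down the same function:

$$ f'(x) = \frac{df}{dx} = \frac{d}{dx}f(x) = Df = D_xf(x) = f_x(x) = 2x $$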
Don’t even get me started on multiplication, where we use $$ 2 \times 2 $$ in elementary school and $$ * $$ on computers, but use $$ \cdot $$ or simply stick two things next to each other in traditional mathematics. Not only is using $$ \times $$ as a multiplicative operator confusing when you have $$ x $$ floating around, but it’s also a real operator: it means cross product in vector analysis. Of course, $$ \cdot $$ also doubles as the Dot Product, which is at least nominally acceptable since a dot product does reduce to a simple multiplication of scalar values. The Outer Product is generally given as $$ \otimes $$, unless you’re in Geometric Algebra, in which case it’s given by $$ \wedge $$, which of course means AND in binary logic. Geometric Algebra then re-uses the cross product symbol $$ \times $$ to instead mean the commutator product, and also defines the regressive product as the dual of the outer product, which uses $$ \nabla $$. This conflicts with the gradient operator in multivariable calculus, which uses the exact same symbol in a totally different context, and just for fun Geometric Algebra also defines $$ * $$ as the “scalar” product, just to make sure every possible operator has been violently hijacked to mean something completely unexpected.
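Programming languages, whatever their other sins, at least force each of these products to carry its own unambiguous name. A quick sketch of the point in Python with NumPy (the vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(a * b)           # elementwise product: [ 4. 10. 18.]
print(np.dot(a, b))    # dot product: 32.0
print(np.cross(a, b))  # cross product: [-3.  6. -3.]
print(np.outer(a, b))  # outer product: a 3x3 matrix
```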
This is just one area of mathematics - it is common for many different subfields of math to redefine operators to mean something else entirely, and god forbid any of these fields actually come into contact with each other, because then no one knows what the hell is going on. Math is a language that is about as consistent as English, and that’s on a good day.
I am sick and tired of people complaining that nobody likes math when they refuse to admit that mathematical notation sucks and is a major roadblock for many students. It is useful only for the advanced mathematics that takes place in university graduate programs and research laboratories. It’s hard enough to teach people calculus, let alone expose them to something actually relevant in our modern world like statistical analysis or matrix algebra, when the notation looks like Greek and makes about as much sense as English pronunciation rules. We simply cannot introduce people to advanced math by writing a bunch of incoherent equations on a whiteboard. We need to find a way to separate the underlying mathematical concepts from the arcane scribbles we force students to deal with.
Personally, I understand most of higher math by reformulating it in terms of lambda calculus and type theory, because they map to real-world programs I can write, investigate, and explore. Interpreting mathematical concepts in terms of computer programs is just one way to make math more tangible. There must be other ways we can explain math without having to explain the extraordinarily dense, outdated notation that we use.
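The derivative, for example, stops being a pile of competing symbols and becomes an ordinary higher-order function: it takes a function and hands back another function. A toy numerical sketch (the step size h and the central-difference approximation are just one arbitrary way to realize it):

```python
def derivative(f, h=1e-6):
    """Take a function f and return a new function approximating f'."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x ** 2
d_square = derivative(square)  # roughly the function x -> 2x

print(d_square(3.0))  # ~6.0
```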