Erik McClure

Using Data To Balance Your Game: Pony Clicker Analysis


*The only thing more addictive than heroin is numbers that keep getting larger.*

Incrementer and idle games are seemingly simplistic games where you wait or click to increase a counter, then use that counter to buy things to make the counter go up faster. Because of the compounding effects involved, these types of games inevitably turn into a study of growth rates and how different functions interact. Cookie Clicker is perhaps the best-known example; it employs an exponential growth curve for the cost of its buildings that looks like this:

\[ Cost_n = Cost_0\cdot 1.15^n \]

where $$ Cost_0 $$ is the initial cost of the building and $$ n $$ is how many of that building you already own. Each building, however, has a fixed income, so the entire game is literally the player trying to purchase upgrades and buildings to fight against an interminable exponential growth curve of the cost function. Almost every single feature added to Cookie Clicker is yet another way to battle the growth rate of the exponential function, delaying the plateauing of the CPS (cookies per second) as long as possible. This includes the reset functionality, which grants heavenly chips that yield large CPS bonuses. However, no feature can compensate for the fact that the buildings do not have a sufficient growth rate to keep up with the exponential cost function, so you inevitably wind up in a dead end where it becomes almost impossible to buy anything in a reasonable amount of time regardless of player action.
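
To make the curve concrete, here is a minimal sketch of that cost function in C++ (the base cost of 15 is an arbitrary example value, not a number taken from either game):

#include <cmath>
#include <cstdio>

// Cost_n = Cost_0 * 1.15^n, where n is how many of the building you already own.
double buildingCost(double baseCost, int owned)
{
  return baseCost * std::pow(1.15, owned);
}

int main()
{
  for(int n = 0; n < 20; ++n)
    std::printf("building %2d costs %.0f\n", n + 1, buildingCost(15.0, n));
  return 0;
}

Even at a modest 15% increase per purchase, the 20th copy of a building already costs over 14 times as much as the first ($$ 1.15^{19} \approx 14.2 $$), which is why a fixed-income building falls behind so quickly.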

Pony Clicker is based on Cookie Clicker, but takes a different approach. Instead of having a set rate for each building, each building generates a number of smiles based on the number of ponies and friendships that you have, along with any other buildings that “synergize” with it. The more expensive buildings generate more smiles because they have a higher growth rate than the buildings below them. This makes the game extremely difficult to balance, because you only have the equations and the cost curves to work with, instead of simply being able to set each building’s SPS (smiles per second) directly. Furthermore, the SPS of a building continues to grow and change over the course of the game, which complicates the balance equation even more. Unfortunately, in the first version of the game, the growth rate of the final building exceeded the growth rate of the cost function, which resulted in immense end-game instability and all-around unhappiness. To address balance problems in Pony Clicker, rather than simply throwing ideas at the wall and play-testing them forever, I wrote a program that played the game for me. It uses a nearly optimal strategy of always buying whichever building is most efficient in terms of cost per +1 SPS gained. This is not a perfectly optimal strategy, which would also have to take into account how long the next purchase takes to afford, but it is pretty close to how players actually tend to play.
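
The core of that strategy is simple to sketch. The snippet below is a simplified, hypothetical version of the idea rather than the actual Pony Clicker simulator: for each building, compute its current price divided by the SPS it would add, and buy whichever building has the lowest ratio.

#include <cstddef>
#include <vector>

// Hypothetical stand-in for a store entry; in Pony Clicker the SPS gain is
// computed from the real equations (ponies, friendships, synergies, etc.).
struct Building {
  double cost;    // current price of the next copy of this building
  double spsGain; // SPS added by buying one more, given the current game state
};

// Greedy "cost per +1 SPS" choice: return the index of the building with the
// lowest cost per unit of SPS gained, or -1 if nothing would increase SPS.
int bestPurchase(const std::vector<Building>& store)
{
  int best = -1;
  double bestRatio = 0.0;
  for(std::size_t i = 0; i < store.size(); ++i) {
    if(store[i].spsGain <= 0.0)
      continue;
    double ratio = store[i].cost / store[i].spsGain;
    if(best < 0 || ratio < bestRatio) {
      best = static_cast<int>(i);
      bestRatio = ratio;
    }
  }
  return best;
}

The rest of the simulation is just a loop: wait until the chosen building is affordable, buy it, recompute every building’s SPS from the new game state, and pick again.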

Using this, I could analyze a game of Pony Clicker in terms of what the SPS looked like over time. My first graph was not very promising:

The SPS completely exploded, and it was obviously terrible. To help me figure out what was going on, I included a graph of the optimal store purchases and the time until the next optimal purchase. My goal in terms of game experience was that no building would be left behind, and that there shouldn’t be enormous gaps between purchases. I also wanted to make sure that neither the early game nor the late game took too long to get through.

In addition to this, I created a graph of the estimated SPS generation of each individual building, on a per-friendship basis. This helped compensate for the fact that the SPS changed as the game state itself changed, allowing me to ensure that the SPS generation of any one building wasn’t drastically out of whack with the others, and that it increased on a roughly linear scale.

This information was used to balance the game into a much more sane curve:

I then added upgrades to the main graph, and quickly learned that I was giving the player certain upgrades way too fast:

This was used to balance the upgrades and ensure they only gave a significant SPS increase when it was really needed (usually between expensive buildings). The analysis page itself is available here, so you can look at the current state of Pony Clicker’s growth curve.

These graphs might not be perfect, but they are incredibly helpful when you are trying to eliminate exponential explosions. If a value spirals out of control, a graph will tell you immediately. It also makes it very easy to balance purchase prices, because you can adjust the prices and see how this affects the optimal gameplay curve. Pony Clicker had so many interacting equations that this was basically the only way I could come up with a game that was even remotely balanced (although it still needs some work). It’s a great example of how important rapid feedback is when designing something. If you can get immediate feedback on what changing something does, the process of refining it becomes much faster, which lets you do a better job of it. It also lets you experiment with things that would otherwise be way too complex to balance by hand.


Does Anyone Actually Want Good Software?


Are there any programmers left that actually care about writing good software? As far as I can tell, the software development industry has turned into a series of echo chambers where managers scream about new features and shipping software and analyzing feedback from customers. Then they ignore all the feedback and implement whatever new things are supposed to be cool, like flat design, or cloud computing, or software as a service.

The entire modern web is built on top of the worst programming language that’s still remotely popular. It’s so awful that IE now supports asm.js just so we can use other languages instead. With everyone relentlessly misquoting “Premature optimization is the root of all evil”, it’s hard to get programmers to optimize any of their code at all, let alone get them to care about things like CPU caches and why allocation on the heap is slow and how memory locality matters.

Some coders exist at large corporations that simply pile on more and more lines of code and force everyone to use gigantic frameworks built on top of more gigantic frameworks built on top of even more gigantic frameworks and then wonder why everything is so slow. Other coders exist in startups that use Scala/Hadoop/Node.js and care only about pumping out features or fixing bugs. The thing is, all of these companies make a lot of money, which leads me to ask, does anyone actually want good software anymore?

Do customers simply not care? Is everyone ok with Skype randomly not sending messages and trying (poorly) to sync all your messages and randomly deciding that certain conversations are always unread on other computers and dropping calls and creating all sorts of other strange and bizarre bugs? Is everyone ok with an antivirus that demands you sign in to a buggy window that keeps losing focus every time you try to type in your password? Is everyone ok with Visual Studio deciding it needs to open a text file in an entirely new instance, taking 15 seconds to start up even though I already have one running, just to display the stupid file?

It seems to me that we’re all so obsessed with making cool stuff, we’ve forgotten how to make stuff that actually works.

Did you know that every single person I know (except for two people) hates flat design? They don’t like it. I don’t like it. There’s a bunch of stuck-up, narcissistic designers shoving flat design down everyone’s throats, and I hate it. The designers don’t care. They insist that it’s elegant and modern and a bunch of other crap that’s all entirely subjective, no matter how hard they try to pretend otherwise. Design is about opinions. If I don’t like your design, you can’t just go and say my opinion is wrong. My opinion isn’t wrong, I just don’t agree with you. There’s a difference.

However, it has become increasingly apparent to me that opinions aren’t allowed in programming. I’m not allowed to say that garbage collectors are bad for high performance software. I’m not allowed to say that pure functional programming isn’t some kind of magical holy grail that will solve all your problems. I’m not allowed to say that flat design is stupid. I’m definitely not allowed to say that I hate Python, because apparently Python is a religion.

Because of this, I am beginning to wonder if I am simply delusional. Apparently I’m the only human being left on planet earth who really, really doesn’t like typing magical bullshit into his linux terminal just to get basic things working instead of having a GUI that wasn’t designed by brain-dead monkeys. Apparently, I’m the only one who is entirely willing to pay money for services instead of having awful, ad-infested online versions powered by JavaScript™ and Node.js™ that fall over every week because someone forgot to cycle the drives in a cloud service 5000 miles away. Apparently, no one can fix the audio sample library industry or the fact that most of my VSTi’s manage to use 20% of my CPU when they aren’t actually doing anything.

Am I simply getting old? Has the software industry left me behind? Does anyone else out there care about these things? Should I throw in the towel and call it quits? Is the future of software development writing terrible monstrosities held together by duct tape? Is this the only way to have a sustainable business?

Is this the world our customers want? Because it sure isn’t what I want.

Unfortunately, writing music doesn’t pay very well.


How Not To Install Software


It’s that time of the year again, when everyone and their pony puts on a sale, except now it seems to have started much earlier than the traditional Black Friday. Needless to say, this is the only time of year I go around buying expensive sample libraries. One of these libraries was recommended by a friend: LA Scoring Strings - First Chair 2, a cheaper version of LA Scoring Strings. It’s $100 off, which is pretty nice, except that the page that describes the product doesn’t actually have a link to buy it. You have to click the STORE link, and then buy it from there, because this is a completely obvious and intuitive interface design (it isn’t).

So, after I find the proper link in the Store and verify I am actually purchasing what I want, they give me exactly one payment option: PayPal. This is pretty common, so we’ll let it slide, and the whole process seems to go smoothly until they give me my receipt. The receipt page gives me a link to download the files, and a serial number. How helpful! Until I click the download link, which does not open in a new window, and instead opens a completely different webpage with absolutely no way to get back to the page I was just on, because there is no store page with this information and I have no user account on this site. So, I have to go to my e-mail, where they have helpfully e-mailed me a copy of the receipt (probably for this exact reason) to get the serial number.

I then go back to the download page only to discover that I am required to use their stupid download manager in order to download the product I just bought. There is no alternative option whatsoever. So I download their stupid download manager, and it magically installs itself somewhere on my computer I will likely never be able to find, because it never asked my permission to do anything, and then demands that I log in. Well, obviously, I don’t have a login, and no one asked me to register until now, so I go to register, which helpfully opens my web browser to register… on a forum. Well, ok, so I register on the forum with a randomly generated password and activate my account.

So naturally, they then e-mail my password back to me, which means they are, at the very least, handling it in plaintext. So now the password to my account has been sent over an unencrypted, entirely open channel, which is insanely stupid, but this is just a sample library, so whatever. I go back to their download manager and put in my credentials and… the login fails. Well, maybe it takes a bit to propagate - no, it just isn’t working. I try again, and triple-check that I have the password right. I log out and back into the forum with that very same password, and it still works. It just doesn’t work in the application.

Standard procedure at this point is for me to take every single weird punctuation character out of my password (making it much weaker) to address the possibility that these people are pants on head retarded and can’t handle a password with punctuation in it. I change my password to an alphanumeric one, and lo and behold, I can suddenly log in to the download manager! Let’s think about this for a moment. The password I used had some punctuation characters in it (like “!&#*@(?” etc.), but in order to make sure it was still a valid password, I logged in to the forum with that password, and it succeeded. I then went to this application and put in the same password and it failed to log me in, which means the program actually only accepts some random subset of all valid passwords that the forum lets you register with.

This is laughably bad programming, but my woes aren’t over yet. I click the download button only to get this incredibly helpful message: “Cannot connect to download servers.” Pissed off, I go play a game in the hopes that once I get back, the servers will work again. I close the game only to discover that my download manager is one giant grey screen no matter what I do to it. It’s forgotten how to draw its own UI at this point. I restart the program, and it has (of course) helpfully forgotten my login credentials. This time, it displays a EULA it apparently forgot to show me the first time around, and once I accept, clicking install successfully starts downloading the files!

Of course, once the files are installed, they aren’t actually installed installed. I have to go into Kontakt and add the libraries to its magical library in order for them to actually get recognized. I can’t tell if this is AudioBro’s fault or Native Instruments’ fault, but at this point I don’t care, because this has already become the worst installation experience of any piece of software I have had to go through in my entire life.

What’s frightening is that this is par for the course across the desolate wasteland that is Audio Sample Libraries. The entire audio engineering industry employs draconian and ultimately ineffective DRM security measures, often bundled with installers that look like they were written in 1998 and never updated. The entire industry uses software that is grotesquely bloated, digging its filthy claws into my operating system and doing all sorts of unspeakable things, and there is no way out.

You can’t disrupt this field, because samples rule everything. If you have good samples, people will buy your shitty sample libraries. EastWest moved from Kontakt (which is a pretty shitty piece of software considering it’s the best sampler in the entire industry) to their own proprietary PLAY engine, which is unstable, bloated, entirely dependent on ASIO4ALL to even work, and prone to crashing. They still make tons of money, because they have the best orchestral samples, which means people will put up with their incredibly bad sampler just so they can use their samples, which are all in a proprietary format that will get you violently sued if you attempt to reverse engineer it.

So, even if you develop the best sampler in the world, it won’t matter, because without samples, your software is dead on arrival. Almost all the samples that are worth having come in proprietary formats that your program can’t understand, and no one can convert these samples to another format (unless they want to reverse engineer the program and get sued, that is). So now the entire sampling industry is locked in an oligopoly of competing samplers that refuse to talk to each other, crushing competition by making the cost of entry so prohibitively high that no one can possibly compete with them. And then you get this shit.


Can We Choose What We Enjoy?


One of the most bizarre arguments I have ever heard in ethics is whether or not people can choose to be gay. The idea is that if being gay is genetically predetermined, it’s not their fault, and therefore you can’t prosecute them for something they have no control over.

Since when did anyone get to choose what makes them happy? Can you choose to like strawberries? Can you choose to enjoy the smell of dandelions? At best, you can subject yourself to something over and over and over again and enjoy it as a sort of acquired taste, but this doesn’t always work, and the fact remains that you are still predisposed to enjoying certain experiences. Unless we make a concentrated effort to change our preferences, all enjoyable sensory experiences occur without our consent. We are not in charge of what combination of neural impulses our brain happens to find enjoyable. All we can do is slowly influence those preferences, and even then, only sometimes.

This concept of people choosing what they enjoy seems to have infected society, and it is at the root of much bizarre and often unfair prosecution. If we assume that people cannot significantly change the preferences they were dealt by life, whether as a result of genetic or environmental influences, a host of moral issues becomes apparent.

Gender roles stop making sense. In fact, prosecuting anyone on the LGBT spectrum immediately becomes invalid. Attacking anyone’s sexual preferences, provided they are harmless, becomes unacceptable. Trying to attack anyone’s artistic or musical preferences becomes difficult, at best. We know for a fact that someone’s culinary preferences are influenced by the genetic distribution of taste buds in their mouth. It’s even hard to properly critique someone’s fashion choices if they happen to despise denim or some other fabric.

As far as I’m concerned, the answer to the question “why would someone like [x]” is always “because their brain is wired in a way that enjoys it.” Humans are, at a fundamental level, sensory processing machines that accidentally achieved self-awareness. We enjoy something because we are programmed to enjoy it. To insult what kinds of sensory input someone enjoys simply because they do not match up with your own is laughably juvenile. The only time this kind of critique is valid is when someone’s preferences cause harm to another person. We all have our own unique ways of processing sensory input, and so we will naturally enjoy different things, through no fault of our own. Sometimes, with a substantial amount of effort, we can slowly change some of those preferences, but most of the time, we’re stuck with whatever we were born with (or whatever environmental factors shaped our perception in our childhood).

Instead of accusing someone of liking something you don’t approve of, maybe next time you should try to understand why they like it. Maybe you’ll find a new friend.


How To Make Your Profiler 10x Faster


Frustrated with C profilers that are either so minimal as to be useless, or giant behemoths that require you to install device drivers, I started writing a lightweight profiler for my utility library. I already had a high precision timer class, so it was just a matter of using a radix trie that didn’t blow up the cache. I was very careful about minimizing the impact the profiler had on the code, even going so far as to check if extended precision floating point calculations were slowing it down.

Of course, since I was writing a profiler, I could use the profiler to profile itself. By pretending to profile a random number added to a cache-murdering int stuck in the middle of an array, I could do a fairly good simulation of profiling a function, while also profiling the act of profiling the function. The difference between the two measurements is how much overhead the profiler has. Unfortunately, my initial results were… unfavorable, to say the least.
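
The measurement itself is easy to sketch with any high-resolution clock. This is a generic illustration using std::chrono rather than the actual BSS profiler code, and profileBegin/profileEnd are hypothetical stand-ins for whatever begin/end hooks your profiler exposes: time the dummy work bare, time it again wrapped in the hooks, and subtract. The heat output below is what my actual initial results looked like.

#include <chrono>
#include <cstdio>
#include <ratio>

// Hypothetical stand-ins for the profiler's begin/end hooks; swap in the real
// calls to measure their overhead.
static void profileBegin() {}
static void profileEnd() {}

// A volatile array for the dummy workload to write into, so the compiler
// can't optimize the loop away.
static volatile int sink[4096];

static void dummyWork(unsigned i)
{
  unsigned idx = (i * 2654435761u) % 4096; // scatter writes across the array
  sink[idx] = sink[idx] + static_cast<int>(i);
}

int main()
{
  using clock = std::chrono::high_resolution_clock;
  const unsigned N = 1000000;

  auto t0 = clock::now();
  for(unsigned i = 0; i < N; ++i)
    dummyWork(i); // control: just the fake workload
  auto t1 = clock::now();
  for(unsigned i = 0; i < N; ++i) {
    profileBegin(); // profiled: the same workload wrapped in the hooks
    dummyWork(i);
    profileEnd();
  }
  auto t2 = clock::now();

  double control = std::chrono::duration<double, std::nano>(t1 - t0).count() / N;
  double profiled = std::chrono::duration<double, std::nano>(t2 - t1).count() / N;
  std::printf("control: %.1f ns, profiled: %.1f ns, overhead: %.1f ns per call\n",
              control, profiled, profiled - control);
  return 0;
}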

BSS Profiler Heat Output: 
[main.cpp:3851] test_PROFILE: 1370173 µs   [##########
  [code]: 545902.7 µs   [##########
  [main.cpp:3866] outer: 5530.022 ns   [....      
    [code]: 3872.883 ns   [...       
    [main.cpp:3868] inner: 1653.139 ns   [.         
  [main.cpp:3856] control: 1661.779 ns   [.         
  [main.cpp:3876] beginend: 1645.466 ns   [.         
The profiler had an overhead of almost 4 microseconds. When you’re dealing with functions that are called thousands of times a second, you need to be aware of code speed on the scale of nanoseconds, and this profiler would completely ruin the code. At first, I thought it was my fault, but none of my tweaks seemed to have any measurable effect on the speed whatsoever. On a whim, I decided to comment out the actual _querytime function that was calling QueryPerformanceCounter, then run an external profiler on it.
Average control: 35 ns
What?! Well, no wonder my tweaks weren’t doing anything; all my code was taking a scant 35 nanoseconds to run. The other 99.9% of the time was spent on that single, stupid call, which also happened to be the one call I couldn’t get rid of. However, that isn’t the end of the story; _querytime() looks like this:
void cHighPrecisionTimer::_querytime(unsigned __int64* _pval)
{
  // Save the current affinity mask, then pin this thread to core 0 so the
  // counter is always read from the same core.
  DWORD procmask=_getaffinity(); 
  HANDLE curthread = GetCurrentThread();
  SetThreadAffinityMask(curthread, 1);
  
  // The actual timestamp query.
  QueryPerformanceCounter((LARGE_INTEGER*)_pval);
  
  // Restore the original affinity mask.
  SetThreadAffinityMask(curthread, procmask);
}

Years ago, it was standard practice to wrap all calls to QueryPerformanceCounter in a CPU core mask to force it to operate on a single core due to potential glitches in the BIOS messing up your calculations. Microsoft itself had recommended it, and you could find this same code in almost any open-source library that was taking measurements. It turns out that this is no longer necessary:

**Do I need to set the thread affinity to a single core to use QPC?**

No. For more info, see Guidance for acquiring time stamps. This scenario is neither necessary nor desirable.

I couldn’t get rid of the QueryPerformanceCounter call itself, but I could get rid of all that other crap it was doing. I commented it out, and voilà! The overhead had been reduced to a scant 340 nanoseconds, only a tenth of what it had been before. I’m still spending 90% of my calculation time calling that stupid function, but there isn’t much I can do about that. Either way, it was a good reminder about the entire reason for using a profiler - bottlenecks tend to crop up in the most unexpected places.
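
For reference, this is roughly what the function reduces to once the affinity juggling is commented out (a sketch of the change described above, not necessarily the exact final code):

void cHighPrecisionTimer::_querytime(unsigned __int64* _pval)
{
  // The affinity masking is gone; only the actual timestamp query remains.
  QueryPerformanceCounter((LARGE_INTEGER*)_pval);
}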

BSS Profiler Heat Output: 
[main.cpp:3851] test_PROFILE: 142416 µs   [##########
  [code]: 56575.4 µs   [##########
  [main.cpp:3866] outer: 515.43 ns   [....      
    [code]: 343.465 ns   [...       
    [main.cpp:3868] inner: 171.965 ns   [.         
  [main.cpp:3876] beginend: 173.025 ns   [.         
  [main.cpp:3856] control: 169.954 ns   [.         

I also tried adding standard deviation measurements, but that ended up giving me ludicrous values of 342±27348 ns, which isn’t very helpful. Apparently there’s quite a lot of variance in function call times, so much so that while the averages always tend to be the same over time, the statistical variance goes through the roof. This is probably why most profilers don’t include the standard deviation. I was able to add in accurate unprofiled code measurements, though, and the profiler uses a dynamic triple magnitude method of displaying how much time a function takes.
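
For anyone who wants to try anyway, the standard way to accumulate a running standard deviation without storing every individual timing is Welford’s online algorithm; this is a generic sketch of the technique, not my actual implementation:

#include <cmath>

// Welford's online algorithm: updates the mean and the sum of squared
// deviations one sample at a time, so no individual timings need to be stored.
struct RunningStats
{
  long long n = 0;
  double mean = 0.0;
  double m2 = 0.0; // sum of squared deviations from the current mean

  void add(double x)
  {
    ++n;
    double delta = x - mean;
    mean += delta / n;
    m2 += delta * (x - mean);
  }

  double stddev() const { return n > 1 ? std::sqrt(m2 / (n - 1)) : 0.0; }
};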

