Erik McClure

Multithreading Problems In Game Design


A couple years ago, when I first started designing a game engine to unify Box2D and my graphics engine, I thought this was a superb opportunity to join all the cool kids and multithread it. I mean all the other game developers were talking about having a thread for graphics, a thread for physics, a thread for audio, etc. etc. etc. So I spent a lot of time teaching myself various lockless threading techniques and building quite a few iterations of various multithreading structures. Almost all of them failed spectacularly for various reasons, but in the end they were all too complicated.

I eventually settled on a single worker thread that was sent off to start working on the physics at the beginning of a frame render. Then, at the beginning of each subsequent frame, I would check to see if the physics were done, and if so, sync the physics and graphics and start up another physics iteration. It was a very clean solution, but fundamentally flawed. For one, it introduces an inescapable frame of input lag.

Single Thread Low Load
  FRAME 1   +----+
            |    |
. Input1 -> |    |
            |[__]| Physics   
            |[__]| Render    
. FRAME 2   +----+ INPUT 1 ON BACKBUFFER
. Input2 -> |    |
. Process ->|    |
            |[__]| Physics
. Input3 -> |[__]| Render
. FRAME 3   +----+ INPUT 2 ON BACKBUFFER, INPUT 1 VISIBLE
.           |    |
.           |    |
. Process ->|[__]| Physics
            |[__]| Render
  FRAME 4   +----+ INPUT 3 ON BACKBUFFER, INPUT 2 VISIBLE

Multi Thread Low Load
  FRAME 1   +----+
            |    | 
            |    |
. Input1 -> |    | 
.           |[__]| Render/Physics START  
. FRAME 2   +----+        
. Input2 -> |____| Physics END
.           |    |
.           |    | 
. Input3 -> |[__]| Render/Physics START
. FRAME 3   +----+ INPUT 1 ON BACKBUFFER
.           |____|
.           |    | Physics END
.           |    | 
            |____| Render/Physics START
  FRAME 4   +----+ INPUT 2 ON BACKBUFFER, INPUT 1 VISIBLE

The multithreading, by definition, results in any given physics update only being reflected in the next rendered frame, because the entire point of multithreading is to start rendering the current frame immediately, as soon as you start calculating physics. This causes a number of latency issues, and in addition it requires a separate "physics update" function that executes only during the physics/graphics sync. As I soon found out, this is a massive architectural complication, especially when you try to put in scripting or let other languages use your engine.
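
For concreteness, here is a minimal sketch of that worker-thread handshake; `Engine`, `PhysicsWorld`, and `Renderer` are hypothetical stand-ins for the real components, and `SyncPhysicsToGraphics` is exactly where that separate "physics update" function gets wedged in:

```cpp
#include <atomic>
#include <thread>

struct PhysicsWorld { void Step(float /*dt*/) { /* run the physics iteration */ } };
struct Renderer     { void DrawFrame()        { /* draw the last synced state */ } };

class Engine {
  PhysicsWorld physics;
  Renderer renderer;
  std::atomic<bool> physicsDone{true};
  std::thread worker;

  void SyncPhysicsToGraphics() { /* copy body transforms into render state */ }

public:
  void Frame(float dt) {
    if (physicsDone.load(std::memory_order_acquire)) {
      if (worker.joinable()) worker.join();  // reap the finished worker
      SyncPhysicsToGraphics();               // physics done: sync, then...
      physicsDone.store(false, std::memory_order_relaxed);
      worker = std::thread([this, dt] {      // ...kick off the next iteration
        physics.Step(dt);
        physicsDone.store(true, std::memory_order_release);
      });
    }
    renderer.DrawFrame();  // always render; physics lags one frame behind
  }

  ~Engine() { if (worker.joinable()) worker.join(); }
};
```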

There is another, more subtle problem with dedicated threads for graphics/physics/audio/AI/anything: it doesn't scale. Let's say you have a platformer - AI will be contained inside the game logic, and the vast majority of your CPU time will be spent in graphics or physics, or both. That means your game effectively has only two threads doing any meaningful amount of work. Modern processors have 8 logical cores[1], and the best one currently available has 12. You're using two of them. You introduced all this complexity and input lag just so you could use 16.6% of the processor instead of 8.3%.

Instead of trying to create a thread for each individual component, you need to go deeper. Parallelize each individual component separately, then tie them together in a single-threaded design. This has the added bonus of being vastly more friendly to single-core CPUs that can't actually run threads concurrently (like certain phones), because the parallelization happens at a lower level and is invisible to the overall architecture of the library. So instead of having a graphics thread and a physics thread, you simply call the physics update, then call the graphics update, and inside each update you spawn enough worker threads to match the number of cores you have to work with and concurrently process as much as possible. This eliminates latency problems and complicated library designs, and it scales forever. Even if your initial implementation of concurrency won't handle 32 cores, because the concurrency is encapsulated inside the engine, you can go back and change it later without ever having to modify any programs that use the graphics engine.
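
As a rough sketch of what "parallelize inside the component" might look like - `UpdateBody` is a hypothetical per-body function assumed to be safe to run on disjoint index ranges:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical per-body update; assumed safe on disjoint index ranges.
void UpdateBody(std::size_t i) { /* integrate body i, etc. */ }

// Fan out across the available cores, then join before returning, so the
// caller still sees a plain single-threaded function call.
void ParallelPhysicsUpdate(std::size_t bodyCount) {
  unsigned cores = std::max(1u, std::thread::hardware_concurrency());
  std::size_t chunk = (bodyCount + cores - 1) / cores;
  std::vector<std::thread> workers;

  for (unsigned c = 0; c < cores; ++c) {
    std::size_t begin = c * chunk;
    std::size_t end = std::min(bodyCount, begin + chunk);
    if (begin >= end) break;
    workers.emplace_back([begin, end] {
      for (std::size_t i = begin; i < end; ++i) UpdateBody(i);
    });
  }
  for (auto& w : workers) w.join();
}
```

The game loop itself stays single-threaded - it just calls `ParallelPhysicsUpdate()` and then the graphics update - and swapping in a smarter scheduler later changes nothing for callers.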

Consequently, don’t try to multithread your games. It isn’t worth it. Separately parallelize each individual component instead and write your game engine single-threaded; only use additional threads for asynchronous activities like resource loading.


[1] The processors actually only have 4 or 6 physical cores, but use hyperthreading techniques so that 8 or 12 logical cores are presented to the operating system. From a software point of view, however, this is immaterial.


Stop Following The Rules


The fact that math, for most people, is about a set of rules, exemplifies how terrible our attempts at teaching it are. A disturbing amount of programming education is also spent hammering proper coding guidelines into students’ heads. Describing someone as a cowboy programmer is often derisive, and wars between standards, rules and languages rage like everlasting fires. It is into these fires we throw the burnt-out husks that were once our imaginations. We have taken our holy texts and turned them into weapons to crush any remnants of creativity that might have survived our childhood’s educational incarceration.

Math and programming are not sets of rules to be followed. Math is a language - an incredibly dense, powerful way of conveying ideas about abstraction and generalization taken to truly astonishing levels. Each theorem is another note added to a chord, and as the chords play one after another, they build on each other, across instruments, to form a grand symphony. Math, in the right hands, is the language of problem solving. Most people know enough math to get by. It’s like knowing enough French to say hello, order food, and call a taxi. You don’t really know the language, you’re just repeating phrases to accomplish basic tasks. Only when you have mastered a certain amount of fluency can you construct your own epigraphs, and taste the feeling of putting thoughts into words.

With the proper background, Math becomes a box of legos. Sometimes you use the legos to solve problems. Other times you just start playing around and see what you can come up with. Like any language, Math can do simple things, like talk about the weather. Or you can write a beautiful novel with words that soar through the reader's imagination. There are many ways to say things in Math. Perhaps you want to derive the formula for the volume of a sphere? You can use geometry, or perhaps calculus, or maybe it would be easier with spherical coordinates. Math even has dialects: there are many ways of writing a derivative, or even a partial derivative (one of my professors once managed to use three in a single lecture). As our mathematical vocabulary grows, we can construct more and more elegant sentences and paragraphs, refining the overall structure of our abstract essay.

Programming, too, is just a language - one of concurrency, functions, and flow control. Programming could be considered a lingual descendant of Math: just as English inherited much of its vocabulary from Latin, programming is a Math-based language. We can use it to express many arcane designs in an efficient manner. Each problem has many different solutions in many different dialects. There's functional programming and procedural programming and object-oriented programming. But the programming community is obsessed with solving boring problems and writing proper code, overly concerned with maintainability, naming conventions, and source control. What constitutes "common sense" varies wildly depending on your chosen venue, and then everyone starts arguing about semicolons.

Creativity finds little support in modern programming. Anything not adhering to strict protocols is considered useless at best, and potentially damaging at worst. Programming education is infused with corporate policy, designed to teach students how to behave and not get into trouble. Even then, it's terribly inconsistent, with multiple factions warring with each other over whose corporate policies are superior. Programming languages are treated more like religions than tools.

The issue is that solving new problems, by definition, requires creative thinking. Corporate policy designed to stamp out anything not adhering to “best practices” is shooting itself in the foot, because it is incapable of solving new classes of problems that didn’t exist 5 years ago. Companies that finally succeed in beating the last drop of creativity out of their employees suddenly need to hire college graduates to solve new problems they don’t know how to deal with, and the cycle starts again. We become so obsessed with enforcing proper code etiquette that we forget how to play with the language. We think we’re doing ourselves a favor by ruthlessly enforcing strict coding guidelines, only to realize our code has already become irrelevant.

We need to separate our mathematical language from the proof. Just as there is more to English than writing technical specifications, there is more to Math than formal research papers, and more to programming than writing mission-critical production code. Rules and standards are part of a healthy programming diet, but we must remember to take everything in moderation. We can’t be so obsessed with writing standardized code that we forget to teach students all the wonderful obscurities of the language. We can’t be telling people to never use a feature of a programming language because they’ll never use it properly. Of course they won’t use it properly if they can’t even try! We should not only be teaching programmers the importance of formality, but where it’s important, and where it’s not. We should encourage less stringent rules on non-critical code and small side projects.

In mathematics, one never writes a proof from beginning to end. Often you work backwards, or take shortcuts, until you finally refine the proof to the point where you can write out the formal specification. When messy code is put into production, it's not the programmer's fault for being creative; it's the fault of the idiot who didn't refactor it first. Solving this by removing all creativity from the entire pipeline is like banning cars to lower the accident rate.

Corporate policy is for corporate code, not experimental features. Don’t let your creativity die. Stop following the rules.


Why Windows 8 Does The Right Thing The Wrong Way


Yesterday, I saw a superb presentation called "When The Consoles Die, What Comes Next?" by Ben Cousins. It demonstrates that mobile gaming is behaving as a disruptive technology, and is causing the same market decline in consoles that consoles themselves caused in arcades in the 1990s. He also demonstrates how TV crushed cinema in a similar manner - we just don't think of it that way because we don't remember when almost 60% of the population went to the movie theater on a weekly basis. Today, most people go to the movie theater only on special occasions, so the theaters didn't completely die out; they just lost their market dominance. The role the movie theater played changed as new technology was introduced.

The game industry, and in fact the software industry as a whole, is in a similar situation. Due to the mass adoption of iPads and other tablets, we now have a mobile computing experience that is distinct from that of, say, a console, or even a PC. Consequently, the role of consoles and PCs will shift in response to this new technology. However, while many people are eager to jump on the bandwagon (and it's a very lucrative bandwagon), we are already losing sight of where the market will eventually stabilize.

People who want to sound futuristic and smart are talking about the "Post-PC Era", which is a very inaccurate thing to say. PCs are clearly very useful for some tasks, and it's unlikely that they will be entirely replaced by mobile computing, especially when screen real-estate is so important to development and productivity, and an ergonomic keyboard is so difficult to replicate. The underlying concept of a PC - sitting down at a keyboard, mouse, and large screen to work - is unlikely to change significantly. The mouse will probably be replaced by adaptive touch solutions and possibly gestures, and the screen might very well turn into a giant glass slab with OLEDs on it, or perhaps simply exist as the wall, but the underlying idea is not going anywhere. It will simply evolve.

Windows 8 is both a surprisingly prescient move on the part of Microsoft, and also (not surprisingly) a horrible train wreck of an OS. The key concept that Microsoft correctly anticipated was the unification of operating systems. It is foolish to think that we will continue on with this brick wall separating tablet devices and PCs. The difference between tablets and PCs is simply one of user interface and user experience. These are both managed by the highest layers of complexity in an operating system, such that it can simply adapt its presentation to suit whatever device it is currently on. It will have to, once we introduce monitors the size of walls and OLED cards with embedded microchips. There will be such a ridiculous number of possible presentation mediums that the idea of a presentation medium must be generalized, so that a single operating system can operate on a stupendous range of devices.

This has important consequences for the future of software. Currently we seem to think that there should be "tablet versions" of software. This is silly and inconvenient. If you buy a piece of software, it should just work, no matter what you put it on. If it finds itself on a PC, it will analyze the screen size and behave appropriately. If it's on a tablet, it will enable touch controls and reorganize the UI appropriately. More importantly, you shouldn't have to buy a version for each of your devices, because eventually there won't be anything other than a computer we carry around with us that plugs into terminals or interacts with small central servers at a company.

If someone buys a game I make, they own a copy of that game. That means they need to be able to get a copy of that game on all their devices without having to buy it 2 or 3 times. The act of buying the game should make it available to install on any interactive medium they want, and my game should simply adapt itself to whatever medium is being used to play it. The difference between PC and tablet will become blurred as they are reduced to simply being different modes of interaction, with the same underlying functionality.

This is what Microsoft is attempting to anticipate by building an operating system that can work on both a normal computer and a tablet. They even introduce a Windows App Store, a crucial step towards allowing you to buy a program for both your PC and your tablet in a single purchase. Unfortunately, the train-wreck analogy is all too appropriate for describing the state of Windows 8. Rather than presenting an elegant, unified tablet and PC experience, it smashes together two completely incompatible interfaces in an incoherent disaster. You are presented with either a Metro interface or a traditional desktop interface, with no in-between. The transition is about as smooth as your head smashing against a brick wall. They don't even properly account for the fact that their new Metro start menu is terribly optimized for a mouse, but they try to make you use it anyway. It does the right thing, the wrong way.

The game industry has yet to catch on to this, since one designs either a "PC game" or a "mobile game". When a game is released on a tablet, it's a special "mobile version". FL Studio has a special mobile version. There is no unification anywhere, and the two are treated as separate walled gardens. While this is currently an advantage, at a time when tablets don't have the kind of power a PC does, it will quickly become a disadvantage. The convenience of having familiar interfaces on all your devices, with all of the same programs, will trump isolated functionality. There will always be games and programs more suited to consoles, or to PCs, or to tablets, but unless we stop thinking of these as separate devices, and instead as one of many possible user experiences that we must adapt our creations to, we will find ourselves on the wrong side of history.


Visual Studio Broke My Computer


So I'd been using the developer preview of VS11 and liked some of its improvements. When the desaturated VS11 beta came out, I hated the color scheme but decided I still wanted the upgraded components, so I went to install the VS11 beta. Unfortunately, the beta only lets you change its install location if the developer preview isn't installed, and the developer preview had installed itself into C:\ without ever letting me change the path, which was annoying. So I took the opportunity to fix things: I uninstalled the developer preview, then installed the VS11 beta.

Everything was fine and dandy until I discovered that VS11 wasn't compiling C++ DLLs that worked on XP. I don't know how it managed to do this, since the DLL had no dependencies whatsoever, and that bug was only supposed to affect MFC and other Windows-related components - hence there was no Windows version flag for me to specify. Just to be sure, I decided to try compiling it in VS2010. It was at this point I discovered that VS2010 could no longer open any projects at all. It was broken. Further investigation revealed that uninstalling the VS11 developer preview breaks VS2010. Now, I had an Ultimate version of VS2010 from Dreamspark that had been sitting around for a while, so I figured I could just uninstall VS2010, reinstall the Ultimate version, and kill any remaining problems the pesky VS11 beta had introduced.

The thing is, I couldn't uninstall the SP1 update from VS2010. Not before I uninstalled VS2010, not after I uninstalled it, not even after I installed the Ultimate version. It just gave me this:

*The removal of Microsoft Visual Studio 2010 Service Pack 1 may put this computer in an state in which projects cannot be loaded and Service Pack 1 cannot be reinstalled. For instructions about how to correct the problem, see the readme on the Microsoft Download Center website.*

So I left the Service Pack alone and attempted to re-apply it after installing VS2010 Ultimate, but the online installer failed. I then downloaded the SP1 iso file and installed that. It failed too, but this time I could fix the problem - someone had forgotten to copy the F#_redist MSI file to the TEMP directory, instead copying only the CAB file. Note that I don't even have F# installed.

I was able to resolve that problem and finished installing the service pack, but to no avail. Both the VS2010 installation and the service pack had forgotten to install the C++ standard library headers, which, as you can imagine, are kind of important. I searched around for a solution, but the only guy who had the same problem as me had simply reformatted and reinstalled Windows (note the moderator's excellent grasp of English grammar). The only thing I had to go on was a special utility Microsoft built to uninstall all traces of VS2010 from your computer. Unfortunately, the utility doesn't actually succeed in uninstalling everything, and it also doesn't uninstall SP1, so you have to uninstall SP1 before running it. The problem is, I can't uninstall SP1, or I'll never be able to install it again.

At this point it appears I am pretty much fucked. How does Microsoft consider this an acceptable scenario? I worked as an intern at Microsoft once; I know they use their own development tools. I used tools that hadn't even been released yet. There was one guy on our team whose entire job was just the setup. And yet, through a series of astonishingly bad failures, any one of which being fixed would have prevented this scenario, my computer is now apparently fucked, and I'm going to have to upgrade my Windows installation to 64-bit a lot sooner than I wanted.

EDIT: After using the uninstall tool to do a full uninstall, uninstalling SP1, manually finding and deleting any VC10-related registry entries, and then reinstalling everything from scratch, I solved the header file problem (but had to reinstall SP1 or it wouldn't let me open my project files). However, the broken VCTargetsPath problem then showed up again, and a repair didn't fix it. I finally fixed the issue by finding someone else with a working installation of VC10, having them export their MSBuild registry key, and manually merging it into my registry. If you have this problem, I've uploaded the registry key (which should be the same for any system, XP or 7) here. If you have a 64-bit machine, you may need to copy its values into the corresponding WoW64 nodes (just search for a second instance of MSBuild in your registry).


Implicit UI Design


For a long time, I have been frustrated with the poor UI design that is rampant in the software industry. Meanwhile, many Linux enthusiasts point out how productive you can be with Emacs, VIM, and other keyboard-shortcut/terminal-oriented software. UI design has gotten so bad that, in comparison to recent user interfaces, keyboard shortcuts are looking rather appealing. This doesn't mean that one approach is inherently better than the other, simply that modern user interfaces suck.

In this blog, I’m going to outline several improvements to user interfaces and the generalized alternative design philosophy that is behind many of them. To start, let’s look at this recent article about Visual Studio 11, which details Microsoft’s latest strategy to ruin their products by sucking all the color out of them:

[Image: the Visual Studio 2010 toolbar]
[Image: the desaturated Visual Studio 11 toolbar]

See, color is kind of important. Notably, color is how I find icons in your toolbar. To me, color is a kind of filter. If I know what color(s) are most prominent in a given icon, I can mentally filter out everything else and only have to browse through a much smaller subset of your icons. Combined with having a vague notion of where a given icon is, this usually reduces the number of icons I have to mentally sift through to only one or two.

[Image: mental color and spatial filtering]

If you remove color, I am left with only spatial elimination, which can make things extremely annoying when there are a bunch of icons right next to each other. Microsoft claims to have done a study showing that this new style does not hurt the speed of identifying a given icon, but it fails to take into account the mental tax of finding an icon without color.

While we understand that opinions on this new style of iconography may vary, an icon recognition study conducted with 76 participants, 40 existing and 36 new VS users, showed no negative effect in icon recognition rates for either group due to the glyph style transition. In the case of the new VS users they were able to successfully identify the new VS 11 icons faster than the VS 2010 icons (i.e., given a command, users were faster at pointing to the icon representing that command).
You may still be able to find the icon, especially after you've been forced to memorize its position just to find it again, and do it reasonably fast, but whether you like it or not, the absence of color will force your brain to process more possible icons before settling on the one you actually want. When I'm compiling something, I have only a vague notion of where the start button actually is. I don't need to know exactly where it is or even what it looks like; it's extremely easy to find since it's practically the only green button on the entire toolbar. Same goes for save and open. The uniform color makes the icons very easy to spot, and all the other icons are immediately discarded.

That said, there are many poorly designed icons in Visual Studio. Take this group of icons:

[Image: a group of badly designed icons]

I don't really know what any of those buttons do. One of them is some sort of toolbox and one is probably the properties window, but they have too many colors. Even looking at those icons is mentally taxing, because there is too much irrelevant information to process. In this scenario, the colors are detrimental to identifying the buttons, because there is no single dominant color, as there is on the play button or the save button. Hence, the best solution is to design icons with principal colors, such that some or all of the icon is one color and the rest is greyscale. The brain edits out the greyscale and allows identification by color, followed by location, followed by actual glyph shape. To avoid overloading the user with a rainbow of colors, consider scaling the amount of color to how likely a given button is to be used - desaturate or shrink the colored area of buttons that are much less important. Color is a tool that can be used correctly or incorrectly; that doesn't mean you should just throw it away the first time you screw up.

We can make better decisions by taking into account how the user is going to use the GUI. By developing an awareness of how a user interface is normally used, we develop vastly superior interactions that accelerate, rather than impede, workflow. To return to Visual Studio, let's take a look at a feature in VS11: pinning a variable preview.

[Image: a variable preview, and the same preview pinned]

This is a terrible implementation for a number of reasons. First, since the pin button is all the way on the other end, it is a moving target and you'll never really be sure where it is until you need to pin something. Furthermore, you can drag the pinned variable around, and you'll want to, after Visual Studio moves it to a seemingly random location that is almost always annoying (but only after the entire IDE locks up for 3 seconds because you haven't done it recently). When would a user be dragging a variable around? Only when it's pinned. A better implementation is to put a handle on the left side of any variable preview. If you click the handle (and optionally drag the variable around), it is implicitly converted to a pinned variable without changing anything else, and a close button appears to the left of the handle.

[Image: the better variable preview and the better pinned preview, with a drag handle]

This is much easier to use, because it eliminates a mouse click and prevents the variable from moving to some random location you must then locate afterwards to move it to your actual desired location. By shifting the close button to the left, it is no longer a moving target. To make this even better, you should make sure previews snap to each other so you can quickly build a list of them, and probably include a menu dropdown by the close button too.

We have just used Implicit UI Design, where instead of forcing the user to explicitly specify what they want to have happen, we can use contextual clues to imply a given action. We knew that the user could not possibly move a variable preview without wanting to pin it, so we simply made the act of moving the preview, pin it. Another example is docking windows. Both Adobe and Visual Studio are guilty of trying to make everything dockable everywhere without realizing that this is usually just extremely annoying, not helpful. I mean really, why would I want to dock the find window?

[Image: the find window, badly docked]
Goddamn it, not again

Even if I were doing a lot of find and replace, it just isn't useful. You can usually cover up half your code while finding and replacing without too much hassle. The only thing this does is make it really hard to move the damn window anywhere, because if you aren't careful you'll accidentally dock it to the toolbar, and then you have to pull the damn thing out and hope nothing else blew up, and if you're really unlucky you'll have to reconstruct the whole freaking solution window.

That isn't helpful. The fact that the act of docking and undocking can be excruciatingly slow makes things even worse and is inexcusable. Only a UI that is bloated beyond my ability to comprehend could possibly have such a difficult time docking and undocking things, no doubt made worse by its fetishization of docking windows. Docking windows correctly requires that you account for the extremely common mistake of accidentally undocking or docking a window where you didn't want it. If a mistake is so common and so annoying, you should make it either much harder to make or very easy to undo. In this case, you should remember where the window was docked last (or not docked), and make a space for it to be dropped into, instead of forcing the user to figure out which magic location on the screen actually docks the window to the right position (which sometimes comes down to differences of 2 or 3 pixels, an absurd level of precision). A sketch of this spot-holding idea follows the image below.

[Image: spot holding - the window's old docking spot held open as a drop target]
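
Here is a minimal sketch of that spot-holding idea; `DockSlot` and the string window ids are made up for illustration:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical identifier for a docking position.
struct DockSlot { std::string host; int position; };

class DockManager {
  std::unordered_map<std::string, DockSlot> lastSlot;  // window id -> old spot
public:
  // Called whenever a window is pulled out of a dock: remember where it was,
  // so the old spot can be held open as an obvious drop target.
  void OnUndock(const std::string& window, const DockSlot& from) {
    lastSlot[window] = from;
  }
  // Called when the user drags the window again: offer its old spot back, so
  // a mistaken undock is undone with one rough drag instead of pixel hunting.
  bool TryRestore(const std::string& window, DockSlot* out) const {
    auto it = lastSlot.find(window);
    if (it == lastSlot.end()) return false;
    *out = it->second;
    return true;
  }
};
```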

Panels are related to docking (usually you can dock something into a collapsible panel), but even these aren’t done very efficiently. If you somehow manage to get the window you want docked into a panel, it defaults to pinning the entire group of panels, regardless of whether they were pinned before or not. I just wanted to pin one!

[Image: pin disaster - pinning one panel pins the entire group]
You have to drag the thing out and redock it to pin just that one panel.

There is a better way to do this. If the user clicks on a collapsible panel, we know they want to show it. However, the only reason they even need to click on the panel is that we don't show it immediately when the mouse hovers over the button. That delay should be less than a tenth of a second, and the panel should immediately close if the mouse gets too far away. It should stay open if the mouse is close enough that the user might have temporarily left the panel but may want to come back in. Hovering over another panel button immediately replaces the current panel (in this case, at least), and dragging the panel title bar or the panel button lets you dock or undock.

Now the user will never need to click the panel to make it show up, so we can make that operation do something else. Why not make clicking a panel pin it open? And don’t do any of that “pin the entire bunch of panels” crap either, just pin that one panel and have it so the other panels can still pop up over it. Then, if you click the panel button again, it’s unpinned. This is so much better than the clunky UI interfaces we have right now, and we did it by thinking about Implicit UI Design. By making the mouse click redundant, we could take that to imply that the user wants the panel to be pinned. Moving the mouse far away from the panel implies that the panel is no longer useful. To make sure a mistake is easy to correct, pinning a panel should be identical to simply having it be hovered over indefinitely, and should not change the surrounding UI in any way. Then a mistake can simply be undone by clicking the panel button again, which is a much larger target than a tiny little pin icon. Combine this with our improved docking above, so that a mistakenly undocked panel, when clicked and dragged again, has its old spot ready and waiting in case you want to undo your mistake.

[Image: panel holding - hover shows the panel, clicking pins it]
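
A minimal sketch of that hover/pin behavior; the timing constants and the `Update`/`OnButtonClick` hooks are made up for illustration and would need tuning:

```cpp
// Per-panel state, driven by the mouse position each frame.
struct Panel {
  bool pinned = false;
  bool visible = false;
  float hoverTime = 0.0f;  // seconds the mouse has been over the panel button
  float awayTime = 0.0f;   // seconds since the mouse left the panel's vicinity

  static constexpr float ShowDelay = 0.1f;  // open almost immediately on hover
  static constexpr float HideDelay = 0.4f;  // forgive briefly straying away

  void Update(float dt, bool overButton, bool nearPanel) {
    if (overButton)      { hoverTime += dt; awayTime = 0.0f; }
    else if (!nearPanel) { awayTime += dt;  hoverTime = 0.0f; }
    else                 { awayTime = 0.0f; }  // near the open panel: stay open

    if (hoverTime >= ShowDelay) visible = true;
    if (!pinned && awayTime >= HideDelay) visible = false;
  }

  // Hovering already shows the panel, so the click is freed up for pinning.
  // Unpinning behaves exactly like an indefinite hover ending, so a mistaken
  // pin is undone by clicking the same (large) button again.
  void OnButtonClick() { pinned = !pinned; }
};
```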

It's 2012. I think it's high time our user interfaces reflected that.

