Erik McClure

Integrating LuaJIT and Autogenerating C Bindings In Visual Studio


Lua is a popular scripting language due to its tight integration with C. LuaJIT is an extremely fast JIT compiler for Lua that can be integrated into your game, and it also provides an FFI library that directly interfaces with C functions, eliminating most of the overhead. However, the FFI library only accepts a subset of the C standard. Specifically, “C declarations are not passed through a C pre-processor, yet. No pre-processor tokens are allowed, except for #pragma pack.” The website suggests running the header file through a preprocessor stage, but I have yet to find a LuaJIT tutorial that actually explains how to do this. Instead, all the examples simply copy+paste the function prototypes into the Lua file itself. Doing this with makefiles and GCC is trivial, because you just have to add a compile step using the -E option, but integrating this with Visual Studio is more difficult. In addition, I’ll show you how to properly load scripts and modify Lua’s path lookup variable so your game can have a proper scripts folder instead of dumping everything in bin.

Compilation

To begin, we need to download LuaJIT and get it to actually compile. Doing this manually isn’t too difficult: simply open an x64 Native Tools Command Prompt (or x86 Native Tools if you want 32-bit), navigate to the src directory, and run msvcbuild.bat. The default options will build an x64 or x86 dll with dynamic linking to the CRT. If you want a static lib file, you need to run it with the static option. If you want static linking to the CRT so you don’t have to deal with that annoying Visual Studio Runtime Library crap, you’ll have to modify the .bat file directly. Specifically, you need to find %LJCOMPILE% /MD and change it to %LJCOMPILE% /MT. This will then compile the static lib or dll with static CRT linking to match your other projects.
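From the Native Tools prompt, the whole manual build boils down to something like this (the path is a placeholder for wherever you unpacked LuaJIT; drop the static argument if you just want the dll):

cd C:\path\to\LuaJIT\src
msvcbuild.bat static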

This is a bit of a pain, and recently I’ve been trying to automate my build process and dependencies using vcpkg to act as a C++ package manager. A port of LuaJIT is included in the latest update of vcpkg, but if you want one that always statically links to the CRT, you can get it here.

An important note: the build instructions for LuaJIT state that you should copy the lua scripts contained in src/jit to your application folder. What it doesn’t mention is that this is optional - those scripts contain debugging instructions for the JIT engine, which you probably don’t need. It will work just fine without them.

Once you have LuaJIT built, you should add its library file to your project. This library file is called lua51.lib (and the dll is lua51.dll), because LuaJIT is designed as a drop-in replacement for the default Lua runtime. Now we need to actually load Lua in our program and integrate it with our code. To do this, use lua_open(), which returns a lua_State* pointer. You will need that lua_State* pointer for everything else you do, so store it somewhere easy to get to. If you are building a game using an Entity Component System, it makes sense to build a LuaSystem that stores your lua_State* pointer.

Initialization

The next step is to load in all the standard Lua libraries using luaL_openlibs(L). Normally, you would skip this if you needed script sandboxing for player-created scripts. However, LuaJIT’s FFI library is inherently unsafe: any script with access to the FFI library can call any kernel API it wants, so you should be extremely careful about using LuaJIT if this is a use-case for your game. We can also register any C functions we want the old-fashioned way via lua_register, but this is only useful for functions that don’t have C analogues (due to having multiple return values, etc).

There is one function in particular that you probably want to overload, and that is the print() function. By default, Lua will simply print to standard out, but if you aren’t redirecting standard out to your in-game console, you probably have your own std::ostream (or even a custom stream class) that is sent all log messages. By overloading print(), we can have our Lua scripts automatically write to both our log file and our in-game console, which is extremely useful. Here is a complete re-implementation of print that outputs to an arbitrary std::ostream& object:

int PrintOut(lua_State *L, std::ostream& out)
{
  int n = lua_gettop(L);  /* number of arguments */
  if(!n)
    return 0;
  int i;
  lua_getglobal(L, "tostring");
  for(i = 1; i <= n; i++)
  {
    const char *s;
    lua_pushvalue(L, -1);  /* function to be called */
    lua_pushvalue(L, i);   /* value to print */
    lua_call(L, 1, 1);
    s = lua_tostring(L, -1);  /* get result */
    if(s == NULL)
      return luaL_error(L, LUA_QL("tostring") " must return a string to "
        LUA_QL("print"));
    if(i > 1) out << "\t";
    out << s;
    lua_pop(L, 1);  /* pop result */
  }
  out << std::endl;
  return 0;
}
To overwrite the existing print function, we first need to define a Lua-compatible shim function. In this example, I pass std::cout as the target stream:

int lua_Print(lua_State *L)
{
  return PrintOut(L, std::cout);
}
Now we simply register our lua_Print function using lua_register(L, "print", &lua_Print). If we were doing this in a LuaSystem object, our constructor would look like this:

LuaSystem::LuaSystem()
{
  L = lua_open();
  luaL_openlibs(L);
  lua_register(L, "print", &lua_Print);
}
To clean up our Lua instance, we trigger a final GC pass to collect any dangling memory and then call lua_close(L), so our destructor would look like this:
LuaSystem::~LuaSystem()
{
  lua_gc(L, LUA_GCCOLLECT, 0);
  lua_close(L);
  L = 0;
}

Loading Scripts via Require

At this point most tutorials skip to the part where you load a Lua script and write “Hello World”, but we aren’t done yet. Integrating Lua into your game means loading scripts and/or arbitrary strings as Lua code while properly resolving dependencies. If you don’t do this, any one of your scripts that relies on another script will have to do require("full/path/to/script.lua"). We also face another problem: if we want a scripts folder where we automatically load every single script into our workspace, loading them all directly can duplicate code, because luaL_loadfile has no knowledge of require. You can solve this by loading a single bootstrap.lua script which then loads all your game’s scripts via require, but we’re going to build a much more robust solution.

First, we need to modify Lua’s path variable - package.path, which controls where require() looks up scripts relative to our current directory. The function below prepends a path (which should be of the form "path/to/scripts/?.lua") to package.path, giving it highest priority. You can use it to add as many script directories as you want to your game, and any Lua script in any of those folders will then be able to require() a script from any other folder in the path without a problem. Obviously, you should probably only add one or two folders, because you don’t want to deal with potential name conflicts in your script files.

int AppendPath(lua_State *L, const char* path)
{
  lua_getglobal(L, "package");
  lua_getfield(L, -1, "path"); // get field "path" from table at top of stack (-1)
  std::string npath = path;
  npath.append(";");
  npath.append(lua_tostring(L, -1)); // grab path string from top of stack
  lua_pop(L, 1);
  lua_pushstring(L, npath.c_str());
  lua_setfield(L, -2, "path"); // set the field "path" in table at -2 with value at top of stack
  lua_pop(L, 1); // get rid of package table from top of stack
  return 0;
}
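For example, registering a scripts folder relative to the working directory is a one-liner (the path here is just a placeholder):
AppendPath(L, "scripts/?.lua"); // anything in scripts/ can now be require()'d by name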
Next, we need a way to load all of our scripts using require() so that Lua properly resolves the dependencies. To do this, we create a function in C that literally calls the require() function for us:
int Require(lua_State *L,const char *name)
{
  lua_getglobal(L, "require");
  lua_pushstring(L, name);
  int r = lua_pcall(L, 1, 1, 0);
  if(!r)
    lua_pop(L, 1);
  WriteError(L, r, std::cout);
  return r;
}
By using this to load all our scripts, we don’t have to worry about loading them in any particular order - require will ensure everything gets loaded correctly. An important note here is WriteError(), which is a generic error handling function that processes Lua errors and writes them to a log. All errors in Lua return a nonzero error code and usually push a string containing the error message onto the stack, which must then be popped off, or it’ll mess things up later.
void WriteError(lua_State *L, int r, std::ostream& out)
{
  if(!r)
    return;
  if(!lua_isnil(L, -1)) // Check if a string was pushed
  {
    const char* m = lua_tostring(L, -1);
    out << "Error " << r << ": " << m << std::endl;
    lua_pop(L, 1);
  }
  else
    out << "Error " << r << std::endl;
}
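With the path set up, loading any script from a registered folder is just a require by module name. For example, a hypothetical scripts/player.lua would be loaded with:
Require(L, "player");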

Automatic C Binding Generation

Fantastic, now we’re all set to load up our scripts, but we still need to somehow define a header file and also load that header file into LuaJIT’s FFI library so our scripts have direct access to our program’s exposed C functions. One way to do this is to just copy+paste your C function definitions into a Lua file in your scripts folder that is then automatically loaded. This, however, is a pain in the butt and is error-prone. We want to have a single source of truth for our function definitions, which means defining our entire LuaJIT C API in a single header file, which is then loaded directly into LuaJIT. Predictably, we will accomplish this by abusing the C preprocessor:

#ifndef __LUA_API_H__
#define __LUA_API_H__

#ifndef LUA_EXPORTS
#define LUAFUNC(ret, name, ...) ffi.cdef[[ ret lua_##name(__VA_ARGS__); ]]; name = ffi.C.lua_##name
local ffi = require("ffi")
ffi.cdef[[ // Initial struct definitions
#else
#define LUAFUNC(ret, name, ...) ret __declspec(dllexport) lua_##name(__VA_ARGS__)
extern "C" { // Ensure C linkage is being used
#endif

struct GameInfo
{
  uint64_t DashTail;
  uint64_t MaxDash;
};

typedef const char* CSTRING; // VC++ complains about having const char* in macros, so we typedef it here

#ifndef LUA_EXPORTS
]] // End struct definitions
#endif

  LUAFUNC(CSTRING, GetGameName);
  LUAFUNC(CSTRING, IntToString, int);
  LUAFUNC(void, setdeadzone, float);

#ifdef LUA_EXPORTS
}
#endif

#endif
The key idea here is to use macros such that, when we pass this through the preprocessor without any predefined constants, it will magically turn into a valid Lua script. However, when we compile it in our C++ project, our project defines LUA_EXPORTS, and the result is a valid C header. Our C LUAFUNC is set up so that we’re using C linkage for our structs and functions, and that we’re exporting the function via __declspec(dllexport). This obviously only works for Visual Studio, so you’ll want to set up a macro for the GCC version, but I will warn you that VC++ got really cranky when I tried to use a macro for that in my code, so you may end up having to redefine the entire LUAFUNC macro for each compiler.

At this point, we have a bit of a choice to make. It’s more convenient to have the C functions available in the global namespace, which is what this script does, because this simplifies calling them from an interactive console. However, using ffi.C.FunctionName is significantly faster. Technically the fastest way is declaring local C = ffi.C at the top of a file and then calling the functions via C.FunctionName. Luckily, importing the functions into the global namespace does not preclude us from using the “fast” way of calling them, so our script here imports them into the global namespace for ease of use, but in our scripts we can use the C.FunctionName method instead. Thus, when outputting our Lua script, our LUAFUNC macro wraps each function definition in a LuaJIT ffi.cdef block, and then runs a second Lua statement that brings the function into the global namespace. This is why we have an initial ffi.cdef code block for the structs up top, so we can include that second Lua statement after each function definition.

Now we need to set up our compilation so that Visual Studio generates this file without any predefined constants and outputs the resulting lua script to our scripts folder, where our other in-game scripts can automatically load it from. We can accomplish this using a Post-Build Event (under Configuration Properties -> Build Events -> Post-Build Event), which then runs the following code:

CL LuaAPI.h /P /EP /u
COPY "LuaAPI.i" "../bin/your/script/folder/LuaAPI.lua" /Y
Visual Studio can sometimes be finicky about that newline, but if you put the two statements on two separate lines, it should run both commands sequentially. You may have to edit the project file directly to convince it to actually do this. The key line here is CL LuaAPI.h /P /EP /u, which tells the compiler to preprocess the file and output it to a *.i file. There is no option to configure the output file; it will always be the exact same filename but with a .i extension, so we have to copy and rename it ourselves to our scripts folder using the COPY command.

Loading and Calling Lua Code

We are now set to load all our Lua scripts in our script folder via Require, but what if we want an interactive Lua console? There are Lua functions that read strings, but to make this simpler, I will provide a function that loads a Lua script from an arbitrary std::istream and outputs to an arbitrary std::ostream:

#define CHUNKSIZE 512 // read buffer size for the stream reader; any reasonable size works

const char* _luaStreamReader(lua_State *L, void *data, size_t *size)
{
  static char buf[CHUNKSIZE];
  reinterpret_cast<std::istream*>(data)->read(buf, CHUNKSIZE);
  *size = reinterpret_cast<std::istream*>(data)->gcount();
  return buf;
}

int Load(lua_State *L, std::istream& s, std::ostream& out)
{
  int r = lua_load(L, &_luaStreamReader, &s, 0);

  if(!r)
  {
    r = lua_pcall(L, 0, LUA_MULTRET, 0);
    if(!r)
      PrintOut(L, out);
  }

  WriteError(L, r, out);
  return r;
}
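For an interactive console, you can then feed whatever the user typed through this same function via a string stream - a minimal sketch, where line holds one line of console input:
std::stringstream ss(line); // requires <sstream>
Load(L, ss, std::cout);     // evaluates the input, printing results and errors to std::cout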
Of course, the other question is how to call Lua functions from our C++ code directly. There are many, many different implementations of this available, of varying amounts of safety and completeness, but to get you started, here is a very simple implementation in C++ using templates. Note that this does not handle errors - you can change it to use lua_pcall and check the return code, but handling arbitrary Lua errors is nontrivial.
template<class T, int N>
struct LuaStack;

template<class T> // Integers
struct LuaStack<T, 1>
{
  static inline void Push(lua_State *L, T i) { lua_pushinteger(L, static_cast<lua_Integer>(i)); }
  static inline T Pop(lua_State *L) { T r = (T)lua_tointeger(L, -1); lua_pop(L, 1); return r; }
};
template<class T> // Pointers
struct LuaStack<T, 2>
{
  static inline void Push(lua_State *L, T p) { lua_pushlightuserdata(L, (void*)p); }
  static inline T Pop(lua_State *L) { T r = (T)lua_touserdata(L, -1); lua_pop(L, 1); return r; }
};
template<class T> // Floats
struct LuaStack<T, 3>
{
  static inline void Push(lua_State *L, T n) { lua_pushnumber(L, static_cast<lua_Number>(n)); }
  static inline T Pop(lua_State *L) { T r = static_cast<T>(lua_tonumber(L, -1)); lua_pop(L, 1); return r; }
};
template<> // Strings
struct LuaStack<std::string, 0>
{
  static inline void Push(lua_State *L, std::string s) { lua_pushlstring(L, s.c_str(), s.size()); }
  static inline std::string Pop(lua_State *L) { size_t sz; const char* s = lua_tolstring(L, -1, &sz); std::string r(s, sz); lua_pop(L, 1); return r; }
};
template<> // Boolean
struct LuaStack<bool, 1>
{
  static inline void Push(lua_State *L, bool b) { lua_pushboolean(L, b); }
  static inline bool Pop(lua_State *L) { bool r = lua_toboolean(L, -1); lua_pop(L, 1); return r; }
};
template<> // Void return type
struct LuaStack<void, 0> { static inline void Pop(lua_State *L) { } };

template<typename T>
struct LS : std::integral_constant<int, 
  std::is_integral<T>::value + 
  (std::is_pointer<T>::value * 2) + 
  (std::is_floating_point<T>::value * 3)>
{};

template<typename R, int N, typename Arg, typename... Args>
inline R _callLua(lua_State *L, const char* function, Arg arg, Args... args)
{
  LuaStack<Arg, LS<Arg>::value>::Push(L, arg);
  return _callLua<R, N, Args...>(L, function, args...);
}
template<typename R, int N>
inline R _callLua(lua_State *L, const char* function)
{
  lua_call(L, N, std::is_void<R>::value ? 0 : 1);
  return LuaStack<R, LS<R>::value>::Pop(L);
}

template<typename R, typename... Args>
inline R CallLua(lua_State *L, const char* function, Args... args)
{
  lua_getglobal(L, function);
  return _callLua<R, sizeof...(Args), Args...>(L, function, args...);
}
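As a quick usage example, calling a global Lua function Update(dt) that returns a boolean (the function name is hypothetical) looks like this:
bool alive = CallLua<bool>(L, "Update", 0.016f); // pushes 0.016, calls Update, pops the boolean result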
Now you have everything you need for an extensible Lua scripting implementation for your game engine, and even an interactive Lua console, all using LuaJIT. Good Luck!


Discord: Rise Of The Bot Wars


The most surreal experience I ever had on discord was when someone PMed me to complain that my anti-spam bot wasn’t working against a 200+ bot raid. I pointed out that it was never designed for large-scale attacks, and that discord’s own rate-limiting would likely make it useless. He revealed he was selling spambot accounts at a rate of about $1 for 100 unique accounts and that he was being attacked by a rival spammer. My anti-spam bot had been dragged into a turf war between two spambot networks. We discussed possible mitigation strategies for worst-case scenarios, but agreed that most of them would involve false-positives and that discord showed no interest in fixing how exploitable their API was. I hoped that I would never have to implement such extreme measures into my bot.

Yesterday, our server was attacked by over 40 spambots, and after discord’s astonishingly useless “customer service” response, I was forced to do exactly that.

A Brief History of Discord Bots

Discord is built on a REST API, which was reverse engineered by late 2015 and used to make unofficial bots. To test out their bots, their creators would hunt for servers to “raid”, invite their bots to the server, then spam so many messages it would softlock the client, because Discord still didn’t have any rate limiting. Naturally, as the designated punching bags of the internet, furries/bronies/Twilight fans/slash fiction writers/etc. were among the first targets. The attack on our server was so severe it took us almost 5 minutes of wrestling with an unresponsive client to ban them. Ironically, a few of the more popular bots today, such as “BooBot”, are banned as a result of that attack, because the first thing the bot creator did was use it to raid our server.

I immediately went to work building an anti-spam bot that muted anyone sending more than 4 messages per second. Building a program in a hostile environment like this is much different from writing a desktop app or a game, because the bot had to be bulletproof - it had to rate-limit itself and could not be allowed to crash, ever. Any bug that allowed a user to crash the bot was treated as P0, because it could be used by an attacker to cripple the server. Despite using a very simplistic spam detection algorithm, this turned out to be highly effective. Of course, back then, discord didn’t have rate limiting, or verification, or role hierarchies, or searching chat logs, or even a way to look up where your last ping was, so most spammers were probably not accustomed to having to deal with any kind of anti-spam system.

I added raid detection, autosilence, an isolation channel, and join alerts, but eventually we were targeted by a group from 4chan’s /pol/ board. Because this was a sustained attack, they began crafting spam attacks timed just below the anti-spam threshold. This forced me to implement a much more sophisticated anti-spam system, using a heat algorithm with a linear decay rate, which is still in use today. This improved anti-spam system eventually made the /pol/ group give up entirely. I’m honestly amazed the simplistic “X messages in Y seconds” approach worked as long as it did.
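The core of such a heat algorithm is simple, even if the tuning isn’t. Here is a minimal sketch with hypothetical constants - not the values the actual bot uses:

#include <algorithm>
#include <chrono>

// Hypothetical tuning constants.
static const double HEAT_PER_MESSAGE = 1.0;
static const double DECAY_PER_SECOND = 0.5;
static const double HEAT_LIMIT = 5.0;

struct UserHeat
{
  double heat = 0.0;
  std::chrono::steady_clock::time_point last = std::chrono::steady_clock::now();
};

// Apply linear decay since the last message, add heat for this one, and
// report whether the user has crossed the spam threshold.
bool AddHeat(UserHeat& u)
{
  auto now = std::chrono::steady_clock::now();
  double elapsed = std::chrono::duration<double>(now - u.last).count();
  u.last = now;
  u.heat = std::max(0.0, u.heat - elapsed * DECAY_PER_SECOND);
  u.heat += HEAT_PER_MESSAGE;
  return u.heat > HEAT_LIMIT;
}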

Of course, none of this can defend against a large scale attack. As I learned from my chance encounter with an actual spammer, it was getting easier and easier to amass an army of spambots to assault a channel instead of just using one or two.

Anatomy Of A Modern Spambot Attack

At peak times (usually during summer break), our server gets raided 1-2 times per day. These minor raids are often just 2-3 tweens who either attempt to troll the chat, or use a basic user script to spam an offensive message. Roughly 60-70% of these raids are either painfully obvious or immediately trigger the anti-spam bot. About 20% of the raids involve slightly intelligent attempts to troll the chat by being annoying without breaking the rules, which usually take about 5-10 minutes to be “exposed”. About 5-10% of the raids are large, involving 8 or more people, but they are also very obvious and can be easily confined to an isolation channel. Problems arise, however, with large spambot raids. Below is a timeline of the recent spambot attack on our server:

Message volume during the raid, 19:41:25 to 19:42:45

This was a botched raid, but the bots that actually worked started spamming within 5 seconds of joining, giving the moderators a very narrow window to respond. The real problem, however, is that so many of them joined, the bot’s API calls to add a role to silence them were rate-limited. They also sent messages once every 0.9 seconds, which is designed to get around Discord’s rate limiting. This amounted to 33 messages sent every second, but it was difficult for the anti-spam to detect. Had the spambots reduced their spam cadence to 3 seconds or more, this attack could have bypassed the anti-spam detection entirely. My bot now instigates a lockdown by raising the verification level when a raid is detected, but it simply can’t silence users fast enough to deal with hundreds of spambots, so at some point the moderators must use a mass ban function. Of course, banning is restricted by the global rate limit, because Discord has no mass ban API endpoint, but luckily the global rate limit is something like 50 requests per second, so if you’re only banning people, you’re probably okay.

However, a hostile attacker could sneak bots in one-by-one every 10 minutes or so, avoiding setting off the raid alarm, and then activate them all at once. 500 bots sending randomized messages chosen from an English dictionary once every 5 seconds after sneaking them in over a 48 hour period is the ultimate attack, and one that is almost impossible to defend against, because this also bypasses the 10-minute verification level. As a weapon of last resort, I added a command that immediately bans all users that sent their first message within the past two minutes, but, again, banning is subject to the global rate limit! In fact, the rate limits can change at any time, and while message deletion has a higher rate limit for bots, bans don’t.

The only other option is to disable the @everyone role from being able to speak on any channel, but you have to do this on a per channel basis, because Discord ignores you if you attempt to globally disable sending message permissions for @everyone. Even then, creating an “approved” role doesn’t work because any automated assignment could be defeated by adding bots one by one. The only defense a small Discord server has is to require moderator approval for every single new user, which isn’t a solution - you’ve just given up having a public Discord server. It’s only a matter of time until any angry 13-year-old can buy a sophisticated attack with a week’s allowance. What will happen to public Discord servers then? Do we simply throw up our hands and admit that humanity is so awful we can’t even have public communities anymore?

The Discord API Hates You

The rate-limits imposed on Discord API endpoints are exacerbated by temporary failures, and that’s excluding network issues. For example, if I attempt to set a silence role on a spammer that just joined, the API will repeatedly claim they do not exist. In fact, 3 separate API endpoints consistently fail to operate properly during a raid: a “member joined” event won’t show up for several seconds, and if I fall back to calling GetMember(), it also claims the member doesn’t exist, which means adding the role fails as well! So I have to attempt to silence the user with every message they send until Discord actually adds the role, even though the API failures are also counted against the rate limit! This gets completely absurd once someone assaults your server with 1000 spambots, because this triggers all sorts of bottlenecks that normally aren’t a problem. The alert telling you a user has joined? Rate limited. It’ll take your bot 5-10 minutes to get through just telling you such a gigantic spambot army joined, unless you include code specifically designed to detect these situations and reduce the number of alerts. Because of this, a single user can trigger something like 5-6 API requests, all of which are counted against your global rate limit and can severely cripple a bot.

The general advice that is usually given here is “just ban them”, which is terrible advice, because Discord’s own awful message handling makes it incredibly easy to trigger a false positive. If a message fails to send, the client simply sends a completely new message with its own ID, and will continue re-sending the message until an Ack is received, at which point the user has probably sent 3 or 4 copies of the same message, each of which has the same content but a completely unique ID and timestamp, which looks identical to a spam attack.

Technically speaking, this is done because Discord assigns snowflake IDs server-side, so each message attempt sent by the client must have a unique snowflake assigned after it is sent. However, it can also be trivially fixed by adding an optional “client ID” field to the message, with a client-generated ID that stays the same if the message is resent due to a network failure. That way, the server (or the other clients) can simply drop any duplicate messages with identical client IDs while still ensuring all messages have unique IDs across their distributed cluster. This would single-handedly fix all duplicate messages across the entire platform, and eliminate almost every single false-positive I’ve seen in my anti-spam bot.
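A rough sketch of what that server-side deduplication could look like (these names and structures are hypothetical - Discord has no such field today):

#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Hypothetical: maps (channel ID, client-generated ID) -> server-assigned snowflake.
static std::map<std::pair<uint64_t, std::string>, uint64_t> g_seen;

// If this (channel, clientID) pair was already accepted, return the original
// snowflake instead of creating a duplicate message; otherwise record the new one.
uint64_t AcceptMessage(uint64_t channel, const std::string& clientID, uint64_t newSnowflake)
{
  auto key = std::make_pair(channel, clientID);
  auto it = g_seen.find(key);
  if(it != g_seen.end())
    return it->second; // duplicate resend - drop it
  g_seen[key] = newSnowflake;
  return newSnowflake;
}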

Discord Doesn’t Care

Sadly, Discord doesn’t seem to care. The general advice in response to “how do I defend against a large scale spam attack” is “just report them to us”, so we did exactly that, and then got what has to be one of the dumbest customer service e-mails I’ve ever seen in my life:

Discord Being Stupid

Excuse me, WHAT?! Sorry about somebody spamming your service with horrifying gore images, but please don’t delete them! What happens if the spammers just delete the messages themselves? What happens if they send child porn? “Sorry guys, please ignore the images that are literally illegal to even look at, but we can’t delete them because Discord is fucking stupid.” Does Discord understand the concept of marking messages for deletion so they are viewable for a short time as evidence for law enforcement?! My anti-spam bot’s database currently has more information than Discord’s own servers! If this had involved child porn, the FBI would have had to ask me for my records because Discord would have deleted them all!

Obviously, we’re not going to leave 500+ gore messages sitting in the chatroom while Discord’s ass-backwards abuse team analyzes them. I just have to hope my own nuclear option can ban them quickly enough, or simply give up the entire concept of having a public Discord server.

The problem is that the armies of spambots that were once reserved for the big servers are now so easy and so trivial to make that they’re beginning to target smaller servers, servers that don’t have the resources or the means to deal with that kind of large scale DDoS attack. So instead, I have to fight the growing swarm alone, armed with only a crippled, rate-limited bot of my own, and hope the dragons flying overhead don’t notice.

What the fuck, Discord.


Windows Won't Let My Program Crash


It’s been known for a while that Windows has a bad habit of eating your exceptions if you’re inside a WinProc callback function. This behavior can cause all sorts of mayhem, like your program just vanishing into thin air without any error messages due to a stack overflow that terminated the program without actually throwing an exception. What I didn’t realize is that it also eats assert(), which makes debugging hell, because the assertion would throw, the entire user callback would immediately terminate without any stack unwinding, and then Windows would just… keep going, even though the program is now in a laughably corrupt state, because only half the function executed.

While trying to find a way to fix this, I discovered that there are no less than 4 different ways Windows can choose to eat exceptions from your program. I had already told the kernel to stop eating my exceptions using the following code:

typedef BOOL (WINAPI *tGetPolicy)(LPDWORD lpFlags);
typedef BOOL (WINAPI *tSetPolicy)(DWORD dwFlags);
const DWORD EXCEPTION_SWALLOWING = 0x1; // aka PROCESS_CALLBACK_FILTER_ENABLED
DWORD dwFlags;

HMODULE kernel32 = LoadLibraryA("kernel32.dll");
assert(kernel32 != 0);
tGetPolicy pGetPolicy = (tGetPolicy)GetProcAddress(kernel32, "GetProcessUserModeExceptionPolicy");
tSetPolicy pSetPolicy = (tSetPolicy)GetProcAddress(kernel32, "SetProcessUserModeExceptionPolicy");
if(pGetPolicy && pSetPolicy && pGetPolicy(&dwFlags))
  pSetPolicy(dwFlags & ~EXCEPTION_SWALLOWING); // Turn off the filter
However, despite this, COM itself was wrapping an entire try {} catch {} statement around my program, so I had to figure out how to turn that off, too. Apparently some genius at Microsoft decided the default behavior should be to just swallow exceptions back when they were making COM, and now they can’t change this default behavior because it’d break all the applications that now depend on COM eating their exceptions to run properly! So, I turned that off with this code:
CoInitialize(NULL); // do this first
if(SUCCEEDED(CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_PKT_PRIVACY, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_DYNAMIC_CLOAKING, NULL)))
{
  IGlobalOptions *pGlobalOptions;
  HRESULT hr = CoCreateInstance(CLSID_GlobalOptions, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGlobalOptions));
  if(SUCCEEDED(hr))
  {
    hr = pGlobalOptions->Set(COMGLB_EXCEPTION_HANDLING, COMGLB_EXCEPTION_DONOT_HANDLE);
    pGlobalOptions->Release();
  }
}
There are two additional functions that could be swallowing exceptions in your program: _CrtSetReportHook2 and SetUnhandledExceptionFilter, but both of these are for SEH or C++ exceptions, and I was throwing an assertion, not an exception. I was actually able to verify, by replacing the assertion #define with my own version, that throwing an actual C++ exception did crash the program… but an assertion didn’t. Specifically, an assertion calls abort(), which raises SIGABRT, which crashes any normal program. However, it turns out that Windows was eating the abort signal, along with every other signal I attempted to raise, which is a problem, because half the library is written in C, and C obviously can’t raise C++ exceptions. The assertion failure even showed up in the output… but didn’t crash the program!
Assertion failed!

Program: ...udio 2015\Projects\feathergui\bin\fgDirect2D_d.dll
File: fgEffectBase.cpp
Line: 20

Expression: sizeof(_constants) == sizeof(float)*(4*4 + 2)
No matter what I do, Windows refuses to let the assertion failure crash the program, or even trigger a breakpoint in the debugger. In fact, calling the __debugbreak() intrinsic, which outputs an int 3 CPU instruction, was completely ignored, as if it simply didn’t exist. The only reliable way to actually crash the program without using C++ exceptions was to do something like divide by 0, or attempt to write to a null pointer, which triggers a segfault.
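If all you need is a guaranteed crash for debugging under these conditions, a deliberate null write is about as reliable as it gets - a minimal sketch:

// Force a hard crash that Windows won't swallow: an access violation from a
// null write, rather than going through abort()/SIGABRT.
inline void ForceCrash()
{
  volatile int* p = nullptr;
  *p = 0; // segfaults here
}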

Any good developer should be using assertions to verify their assumptions, so having assertions silently fail and then corrupt the program is even worse than not having them at all! Now you could have an assertion in your code that’s firing, terminating that callback, leaving your program in a broken state, and then the next message that’s processed blows up for strange and bizarre reasons that make no sense because they’re impossible.

I have a hard enough time getting my programs to work, I didn’t think it’d be this hard to make them crash.


DirectX Is Terrifying


About two months ago, I got a new laptop and proceeded to load all my projects on it. Despite everything compiling fine, my graphics engine that used DirectX mysteriously crashed upon running. I immediately suspected either a configuration issue or a driver issue, but this seemed weird because my laptop had a newer graphics card than my desktop. Why was it crashing on newer hardware? Things got even more bizarre once I narrowed down the issue - it was in my shader assignment code, which hadn’t been touched in almost 2 years. While I initially suspected a shader compilation issue, there was no such error in the logs. All the shaders compiled fine, and then… didn’t work.

Now, if this error had also been happening on my desktop, I would have immediately started digging through my constant assignments, followed by the vertex buffers assigned to the shader, but again, all of this ran perfectly fine on my desktop. I was completely baffled as to why things weren’t working properly. I had eliminated all possible errors I could think of that would have resulted from moving the project from my desktop to my laptop: none of the media files were missing, all the shaders compiled, all the relative paths were correct, I was using the exact same compiler as before with all the appropriate updates. I even updated drivers on both computers, but it stubbornly refused to work on the laptop while running fine on the desktop.

Then I found something that nearly made me shit my pants.

if(profile <= VERTEX_SHADER_5_0 && _lastVS != shader) {
  //...
} else if(profile <= PIXEL_SHADER_5_0 && _lastPS != shader) { 
  //...
} else if(profile <= GEOMETRY_SHADER_5_0 && _lastGS != shader) {
  //...
}
Like any sane graphics engine, I do some very simple caching by keeping track of the last shader I assigned and only setting the shader if it has actually changed. These if statements, however, have a very stupid but subtle bug that took me quite a while to catch. They’re a standard range exclusion chain that figures out what type of shader a given shader version is. If it’s less than, say, 5, it’s a vertex shader. Otherwise, if it’s less than 10, this means it’s in the range 5-10 and is a pixel shader. Otherwise, if it’s less than 15, it must be in the range 10-15, ad infinitum. The idea is that you don’t need to check if the value is greater than 5 because the failure of the previous statement already implies that. However, adding that cache check on the end breaks all of this, because now you could be in the range 0-5, but the cache check could fail, throwing you down to the next statement checking to see if you’re below 10. Because you’re in the range 0-5, you’re of course below 10, and the cache check will ALWAYS succeed, because no vertex shader would ever be in the pixel shader cache! All my vertex shaders were being sent in to DirectX as pixel shaders after their initial assignment!
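The fix is simply to keep the range check and the cache check separate, something along these lines:

if(profile <= VERTEX_SHADER_5_0) {
  if(_lastVS != shader) { /* assign vertex shader */ }
} else if(profile <= PIXEL_SHADER_5_0) {
  if(_lastPS != shader) { /* assign pixel shader */ }
} else if(profile <= GEOMETRY_SHADER_5_0) {
  if(_lastGS != shader) { /* assign geometry shader */ }
}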

For almost 2 years, I had been feeding DirectX total bullshit, and had even tested it on multiple other computers, and it had never given me a single warning, error, crash, or any indication whatsoever that my code was completely fucking broken, in either debug mode or release mode. Somehow, deep in the depths of nVidia’s insane DirectX driver, it had managed to detect that I had just tried to assign a vertex shader to a pixel shader, and either ignored it completely, or silently fixed my catastrophic fuckup. However, my laptop had the mobile drivers, which for some reason did not include this failsafe, and so it actually crashed like it was supposed to.

While this was an incredibly stupid bug that I must have written while sleep deprived or drunk (which is impressive, because I don’t actually drink), it was simply impossible for me to catch because it produced zero errors or warnings. As a result, this bug has the honor of being both the dumbest and the longest-living bug of mine, ever. I’ve checked every location I know of for any indication that anything was wrong, including hooking into the debug log of DirectX and dumping all its messages. Nothing. Nada. Zilch. Zero.

I’ve heard stories about the insane bullshit nVidia’s drivers do, but this is fucking terrifying.

Alas, there is more. I had been experimenting with Direct2D as an alternative because, well, it’s a 2D engine, right? After getting text rendering working, a change in string caching suddenly broke the entire program. It broke in a particularly bizarre way, because it seemed to just stop rendering halfway through the scene. It took almost an hour of debugging for me to finally confirm that the moment I was rendering a particular text string, the Direct2D driver just stopped. No errors were thrown. No warnings could be found. Direct2D’s failure state was apparently to simply make every single function call silently fail with no indication that it was failing in the first place. It didn’t even tell me that the device was missing or that I needed to recreate it. The text render call was made and then every single subsequent call was ignored and the backbuffer was forever frozen to that half-drawn frame.

The error itself didn’t seem to make any more sense, either. I was passing a perfectly valid string to Direct2D, but because that string originated in a different DLL, it apparently made Direct2D completely shit itself. Copying the string onto the stack, however, worked (which itself could only work if the original string was valid).

The cherry on top of all this is when I discovered that Direct2D’s matrix rotation constructor takes degrees, not radians, like every single other mathematical function in the standard library. Even DirectX takes radians!

WHO WRITES THESE APIs?!


Everyone Does sRGB Wrong Because Everyone Else Does sRGB Wrong


Did you know that CSS3 does all its linear gradients and color interpolation completely wrong? All color values in CSS3 are in the sRGB color space, because that’s the color space that gets displayed on our monitor. However, the problem is that the sRGB color space looks like this:

sRGB gamma curve

Trying to do a linear interpolation along a nonlinear curve doesn’t work very well. Instead, you’re supposed to linearize your color values, transforming the sRGB curve to the linear RGB curve before doing your operation, and then transforming it back into the sRGB curve. This is gamma correction. Here are comparisons between gradients, transitions, alpha blending, and image resizing done directly in sRGB space (assuming your browser complies with the W3C spec) versus in linear RGB:

sRGB versus linear RGB comparison

At this point you’ve probably seen a bazillion posts about how you’re doing color interpolation wrong, or gradients wrong, or alpha blending wrong, and the reason is because… you’re doing it wrong. But one can hardly blame you, because everyone is doing it wrong. CSS does it wrong because SVG does it wrong because Photoshop does it wrong because everyone else does it wrong. It’s all wrong.

The amazing thing here is that the W3C is entirely aware of how wrong CSS3 linear gradients are, but did it anyway to be consistent with everything else that does them wrong. It’s interesting that while SVG is wrong by default, it does provide a way to fix this, via color-interpolation. Of course, CSS doesn’t have this property yet, so literally all gradients and transitions on the web are wrong because there is no other choice. Even if CSS provided a way to fix this, it would still have to default to being wrong.

It seems we have reached a point where, after years of doing sRGB interpolation incorrectly, we continue to do it wrong not because we don’t know better, but because everyone else is doing it wrong. A single bad choice made long ago has locked us into compatibility hell: we got it wrong the first time, so now we have to keep getting it wrong, because everyone expects the wrong result.

It doesn’t help that we don’t always necessarily want the correct interpolation. The reason direct interpolation in sRGB is wrong is because it changes the perceived luminosity. Notice that when going from red to green, the sRGB gradient gets darker in the middle of the transition, while the gamma-corrected one has constant perceived luminosity. However, an artist may prefer the sRGB curve to the linearized one because it puts more emphasis on red and green. This problem gets worse when artists use toolchains inside sRGB and unknowingly compensate for any gamma errors such that the result is roughly what one would expect. This is a particular issue in games, because games really do need gamma-correct lighting pipelines, but the GUIs were often built using incorrect sRGB interpolation, so doing gamma-correct alpha blending gives you the wrong result because the alpha values were already chosen to compensate for incorrect blending.

In short, this is quite a mess we’ve gotten ourselves into, but I think the most important thing we can do is give people the option of having a gamma correct pipeline. It is not difficult to selectively blend things with proper gamma correction. We need to have things like SVG’s color-interpolation property in CSS, and other software needs to provide optional gamma correct pipelines (I’m looking at you, photoshop).
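For reference, a gamma-correct lerp of a single color channel is only a few lines using the standard sRGB transfer function - a minimal sketch:

#include <cmath>

// Standard sRGB transfer function for a single channel in [0,1].
float SrgbToLinear(float c) { return c <= 0.04045f ? c / 12.92f : std::pow((c + 0.055f) / 1.055f, 2.4f); }
float LinearToSrgb(float c) { return c <= 0.0031308f ? c * 12.92f : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f; }

// Gamma-correct lerp: linearize, interpolate, then convert back to sRGB for display.
float LerpSrgb(float a, float b, float t)
{
  float la = SrgbToLinear(a), lb = SrgbToLinear(b);
  return LinearToSrgb(la + (lb - la) * t);
}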

Maybe, eventually, we can dig ourselves out of this sRGB hell we’ve gotten ourselves into.

