Erik McClure

Why Windows 8 Does The Right Thing The Wrong Way


Yesterday, I saw a superb presentation called “When The Consoles Die, What Comes Next?” by Ben Cousins. It demonstrates that mobile gaming is behaving as a disruptive technology, and is causing the same market decline in consoles that consoles themselves caused in arcades in the 1990s. He also demonstrates how TV crushed cinema in a similar manner - we just don’t think of it like that, because we don’t remember a time when almost 60% of the population went to the movie theater on a weekly basis. Today, most people go to the movie theater only on special occasions, so the theaters didn’t completely die out; they just lost their market dominance. The role the movie theater played changed as new technology was introduced.

The game industry, and in fact the software industry as a whole, is in a similar situation. Due to the mass adoption of iPads and other tablets, we now have a mobile computing experience that is distinct from that of, say, a console, or even a PC. Consequently, the role of consoles and PCs will shift in response to this new technology. However, while many people are eager to jump on the bandwagon (and it’s a very lucrative bandwagon), we are already losing sight of what will actually happen as the market stabilizes.

People who want to sound futuristic and smart talk about the “Post-PC Era”, which is a very inaccurate thing to say. PCs are clearly very useful for some tasks, and it’s unlikely that they will be entirely replaced by mobile computing, especially given how important screen real-estate is to development and productivity, and how difficult it is to replicate an ergonomic keyboard. The underlying concept of a PC - you sit down at it, with a keyboard, a mouse, and a large screen to work at - is unlikely to change significantly. The mouse will probably be replaced by adaptive touch solutions and possibly gestures, and the screen might very well turn into a giant glass slab with OLEDs on it, or perhaps simply exist as the wall, but the underlying idea is not going anywhere. It will simply evolve.

Windows 8 is both a surprisingly prescient move on the part of Microsoft and also (not surprisingly) a horrible train wreck of an OS. The key concept that Microsoft correctly anticipated is the unification of operating systems. It is foolish to think that we will continue on with this brick wall separating tablet devices and PCs. The difference between a tablet and a PC is simply one of user interface and user experience. Both of these are managed by the highest layers of complexity in an operating system, so an OS can simply adapt its presentation to suit whatever device it is currently on. It will have to, once we introduce monitors the size of walls and OLED cards with embedded microchips. There will be such a ridiculous number of possible presentation mediums that the idea of a presentation medium must be generalized, so that a single operating system can operate on a stupendous range of devices.

This has important consequences for the future of software. Currently we seem to think that there should be “tablet versions” of software. This is silly and inconvenient. If you buy a piece of software, it should just work, no matter what you put it on. If it finds itself on a PC, it will analyze the screen size and behave appropriately. If it’s on a tablet, it will enable touch controls and reorganize the UI appropriately. More importantly, you shouldn’t have to buy a version for each of your devices, because eventually there won’t be anything other than a computer we carry around with us that plugs into terminals or interacts with small central servers at a company.
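
As a sketch of what I mean - with entirely hypothetical names, since no such cross-device API exists yet - the same binary could pick its interaction mode at startup:

```cpp
// Hypothetical sketch only: one binary choosing its presentation at startup.
// DeviceInfo and the thresholds are made up for illustration.
#include <iostream>

enum class InputMode { MouseKeyboard, Touch };

struct DeviceInfo {
  int screen_width = 0;
  int screen_height = 0;
  bool has_touch = false;
};

InputMode ChooseMode(const DeviceInfo& dev) {
  // A small screen with a touch digitizer: behave like the "tablet version".
  if (dev.has_touch && dev.screen_width <= 1366) return InputMode::Touch;
  return InputMode::MouseKeyboard;
}

int main() {
  DeviceInfo dev{1280, 800, true}; // in reality, queried from the OS
  if (ChooseMode(dev) == InputMode::Touch)
    std::cout << "touch controls on, UI reorganized for fingers\n";
  else
    std::cout << "dense desktop layout for mouse and keyboard\n";
}
```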

If someone buys a game I make, they own a copy of that game. That means they need to be able to get a copy of that game on all their devices without having to buy it 2 or 3 times. The act of buying the game should make it available to install on any interactive medium they want, and my game should simply adapt itself to whatever medium is being used to play it. The difference between PC and tablet will become blurred as they are reduced to simply being different modes of interaction, with the same underlying functionality.

This is what Microsoft is attempting to anticipate by building an operating system that can work on both a normal computer and a tablet. They even introduce a Windows App Store, which is a crucial step towards allowing you to buy a program for both your PC and your tablet in a single purchase. Unfortunately, the train-wreck analogy is all too appropriate for describing the state of Windows 8. Rather than presenting an elegant, unified tablet and PC experience, it smashes together two completely incompatible interfaces into an incoherent disaster. You are either presented with the Metro interface or the traditional desktop interface, with no in-between. The transition is about as smooth as your head smashing against a brick wall. They don’t even properly account for the fact that their new Metro start menu works terribly with a mouse, but they try to make you use it anyway. It does the right thing, the wrong way.

The game industry has yet to catch on to this, since one still designs either a “PC game” or a “mobile game”. When a game is released on a tablet, it’s a special “mobile version”. FL Studio has a special mobile version. There is no unification anywhere, and the two are treated as separate walled gardens. While this is currently an advantage, at a time when tablets don’t have the kind of power a PC does, it will quickly become a disadvantage. The convenience of having familiar interfaces on all your devices, with all of the same programs, will trump isolated functionality. There will always be games and programs more suited to consoles, or to PCs, or to tablets, but unless we stop thinking of these as separate devices, and instead as one of many possible user experiences that we must adapt our creations to, we will find ourselves on the wrong side of history.


Visual Studio Broke My Computer


So I’d been using the developer preview of VS11 and liked some of its improvements. When the desaturated VS11 beta came out, I hated the color scheme but decided I still wanted the upgraded components, so I went to install the VS11 beta. Unfortunately, the beta only lets you change its install location if the developer preview isn’t installed, and the developer preview had installed itself into C:\ without ever letting me change the path, which was annoying. So I took the opportunity to fix things: I uninstalled the developer preview, then installed the VS11 beta.

Everything was fine and dandy until I discovered that VS11 wasn’t compiling C++ DLLs that worked on XP. I don’t know how it managed this, since the DLL had no dependencies whatsoever; that bug was only supposed to affect MFC and other Windows-related components, and hence there was no Windows version flag for me to specify which version I wanted. Just to be sure, I decided to try compiling it in VS2010. It was at this point I discovered that VS2010 could no longer open any projects at all. It was broken. Further investigation revealed that uninstalling the VS11 developer preview will break VS2010. Now, I had an Ultimate edition of VS2010 from Dreamspark that had been sitting around for a while, so I figured I could just uninstall VS2010 and then reinstall the Ultimate edition, and that would kill any remaining problems the pesky VS11 beta had introduced.

The thing is, I can’t uninstall the SP1 update from VS2010. Not before uninstalling VS2010, not after uninstalling it, not even after installing the Ultimate edition. It just gave me this:

*The removal of Microsoft Visual Studio 2010 Service Pack 1 may put this computer in a state in which projects cannot be loaded and Service Pack 1 cannot be reinstalled. For instructions about how to correct the problem, see the readme on the Microsoft Download Center website.*

So I left the service pack alone and attempted to re-apply it after installing VS2010 Ultimate, but the online installer failed. So then I downloaded the SP1 ISO and installed that. It failed too, but this time I could fix the problem - someone had forgotten to copy the F#_redist MSI file to the TEMP directory, copying only the CAB file. Note that I don’t even have F# installed.

I was able to resolve that problem and finished installing the service pack, but to no avail. Both the VS2010 installation and the service pack had forgotten to install the C++ standard library headers, which, as you can imagine, are kind of important. I searched around for a solution, but the only guy who had the same problem as me had simply reformatted and reinstalled Windows (note the moderator’s excellent grasp of English grammar). The only lead I had left was a special utility Microsoft built to uninstall all traces of VS2010 from your computer. Unfortunately, the utility doesn’t actually succeed in uninstalling everything, and it also doesn’t uninstall SP1, so you have to uninstall SP1 before running it. The problem is, I can’t uninstall SP1, or I’ll never be able to install it again.

At this point it appears I am pretty much fucked. How does Microsoft consider this an acceptable scenario? I worked as an intern at Microsoft once; I know they use their own development tools. I used tools that hadn’t even been released yet. There was one guy on our team whose entire job was just the setup. And yet, through a series of astonishingly bad failures, any one of which, if fixed, would have prevented this scenario, my computer is now apparently fucked, and I’m going to have to upgrade my Windows installation to 64-bit a lot sooner than I wanted.

EDIT: After using the uninstall tool to do a full uninstall, uninstalling SP1, manually finding any VC10-related registry entries and deleting them, then reinstalling everything from scratch, I solved the header file problem (but had to reinstall SP1 or it wouldn’t let me open my project files). However, the broken VCTargetsPath problem then showed up again, and a repair didn’t fix it. I finally fixed the issue by finding someone else with a working installation of VC10, having them export their MSBuild registry key, and manually merging it into my registry. If you have this problem, I’ve uploaded the registry key (which should be the same for any system, XP or 7) here. If you have a 64-bit machine, you may need to copy its values into the corresponding WoW64 nodes (just search for a second instance of MSBuild in your registry).


Implicit UI Design


For a long time, I have been frustrated with the poor UI design that is rampant in the software industry. Meanwhile, many Linux enthusiasts point out how productive you can be with Emacs, VIM, and other keyboard-shortcut/terminal-oriented software. UI design has gotten so bad that I have to agree: compared to recent user interfaces, keyboard shortcuts are looking rather appealing. This doesn’t mean that one approach is inherently better than the other, though; it simply means that modern user interfaces suck.

In this post, I’m going to outline several improvements to user interfaces, and the generalized alternative design philosophy behind many of them. To start, let’s look at this recent article about Visual Studio 11, which details Microsoft’s latest strategy of ruining their products by sucking all the color out of them:

[Image: the Visual Studio 2010 toolbar]
[Image: the Visual Studio 11 toolbar]

See, color is kind of important. Notably, color is how I find icons in your toolbar. To me, color is a kind of filter. If I know what color(s) are most prominent in a given icon, I can mentally filter out everything else and only have to browse through a much smaller subset of your icons. Combined with having a vague notion of where a given icon is, this usually reduces the number of icons I have to mentally sift through to only one or two.

[Image: mental color and spatial filtering of a toolbar]

If you remove color, I am left with only spatial elimination, which gets extremely annoying when there are a bunch of icons right next to each other. Microsoft claims to have done a study showing that the new style does not hurt the speed of identifying a given icon, but that fails to take into account the mental tax that goes into finding an icon without color.

*While we understand that opinions on this new style of iconography may vary, an icon recognition study conducted with 76 participants, 40 existing and 36 new VS users, showed no negative effect in icon recognition rates for either group due to the glyph style transition. In the case of the new VS users they were able to successfully identify the new VS 11 icons faster than the VS 2010 icons (i.e., given a command, users were faster at pointing to the icon representing that command).*

You may still be able to find the icon reasonably fast, especially after you’ve been forced to memorize its position just to find it again, but whether you like it or not, the absence of color forces your brain to process more candidate icons before settling on the one you actually want. When I’m compiling something, I only have a vague notion of where the start button actually is. I don’t need to know exactly where it is or even what it looks like; it’s extremely easy to find, since it’s practically the only green button on the entire toolbar. The same goes for save and open. The uniform color makes those icons very easy to spot, and all the other icons are immediately discarded.

That said, there are many poorly designed icons in Visual Studio. Take this group of icons:

[Image: a group of poorly designed icons]

I don’t really know what any of those buttons do. One of them is some sort of toolbox and one is probably the properties window, but they have too many colors. Once again, it becomes mentally taxing just to look at those icons, because there is too much irrelevant information to process. In this scenario, the colors are detrimental to identifying the buttons, because there is no single dominant color, like on the play button or the save button. Hence, the best solution is to design icons with principal colors, such that some or all of the icon is one color and the rest is greyscale. The brain edits out the greyscale and allows identification by color, followed by location, followed by actual glyph shape. To avoid overloading the user with a rainbow of colors, consider scaling the amount of color to how likely a given button is to be used: desaturate or shrink the colored area of buttons that are much less important. Color is a tool that can be used correctly or incorrectly - that doesn’t mean you should just throw it away the first time you screw up.
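
To make that concrete, here is a minimal sketch of the principal-color idea, assuming plain RGB pixel buffers (the hue math is the standard HSV-style calculation): keep pixels near the icon’s chosen hue and desaturate everything else.

```cpp
// Sketch of the "principal color" idea: keep pixels near one chosen hue and
// desaturate the rest, so the brain can filter the toolbar on that color.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Standard HSV-style hue in degrees, or -1 for achromatic pixels.
static float Hue(const RGB& c) {
  float r = c.r / 255.0f, g = c.g / 255.0f, b = c.b / 255.0f;
  float mx = std::max({r, g, b}), mn = std::min({r, g, b});
  if (mx == mn) return -1.0f;
  float d = mx - mn, h;
  if (mx == r)      h = (g - b) / d;
  else if (mx == g) h = (b - r) / d + 2.0f;
  else              h = (r - g) / d + 4.0f;
  return std::fmod(h * 60.0f + 360.0f, 360.0f);
}

// Desaturate every pixel whose hue is further than `tolerance` degrees
// (circularly) from the icon's principal hue.
void EmphasizePrincipalHue(std::vector<RGB>& icon, float principal, float tolerance) {
  for (RGB& p : icon) {
    float h = Hue(p);
    float dist = (h < 0) ? 360.0f
                         : std::fabs(std::fmod(h - principal + 540.0f, 360.0f) - 180.0f);
    if (dist > tolerance) {
      uint8_t gray = static_cast<uint8_t>(0.299f * p.r + 0.587f * p.g + 0.114f * p.b);
      p = {gray, gray, gray};
    }
  }
}

int main() {
  std::vector<RGB> icon{{200, 40, 40}, {80, 120, 200}};
  EmphasizePrincipalHue(icon, 0.0f, 30.0f); // keep the reds, gray out the blue
}
```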

We can make better decisions by taking into account how the user is going to use the GUI. By developing an awareness of how a user interface is normally used, we develop vastly superior interactions that accelerate, rather than impede, workflow. To return to Visual Studio, let’s take a look at a VS11 feature: pinning a variable preview.

[Image: a variable preview and a pinned variable preview]

This is a terrible implementation for a number of reasons. First, since the pin button is all the way on the other end of the preview, it is a moving target and you’ll never really be sure where it is until you go to pin something. Furthermore, you can drag the pinned variable around, and you’ll want to, because Visual Studio moves it to a seemingly random location that is almost always annoying (but only after the entire IDE locks up for 3 seconds because you haven’t done this recently). When would a user be dragging a variable around? Only when it’s pinned. A better implementation is to put a handle on the left side of any variable preview. If you click the handle (and optionally drag the variable around), the preview is implicitly converted to a pinned variable without anything else changing, and a close button appears to the left of the handle.

[Image: the improved variable preview and pinned preview]

This is much easier to use, because it eliminates a mouse click and prevents the variable from jumping to some random location that you must then hunt down just to move it where you actually wanted it. By shifting the close button to the left, it is no longer a moving target. To make this even better, previews should snap to each other so you can quickly build a list of them, and a menu dropdown probably belongs next to the close button too.
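
A minimal sketch of that interaction - every widget and event name here is hypothetical, since this obviously isn’t Visual Studio’s actual code:

```cpp
// Sketch of the implicit-pin interaction: grabbing a preview's handle *is*
// the pin request. All widget and event names here are hypothetical.
struct Point { int x = 0, y = 0; };

struct VariablePreview {
  bool pinned = false;
  bool close_button_visible = false;
  Point pos;

  // Mouse press on the left-side handle: pinning is implied, nothing moves.
  void OnHandleMouseDown() {
    pinned = true;
    close_button_visible = true; // appears to the left of the handle
  }

  // Dragging with the handle held just repositions the now-pinned preview.
  void OnHandleDrag(Point delta) {
    pos.x += delta.x;
    pos.y += delta.y;
  }
};

int main() {
  VariablePreview preview;
  preview.OnHandleMouseDown();   // user grabs the handle: implicitly pinned
  preview.OnHandleDrag({40, 0}); // optional drag; no separate pin click needed
}
```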

We have just used Implicit UI Design: instead of forcing the user to explicitly specify what they want to have happen, we use contextual clues to imply a given action. We knew that the user could not possibly want to move a variable preview without also wanting to pin it, so we simply made the act of moving the preview pin it. Another example is docking windows. Both Adobe and Visual Studio are guilty of trying to make everything dockable everywhere, without realizing that this is usually just extremely annoying, not helpful. I mean really, why would I want to dock the find window?

[Image: the find window accidentally docked]
Goddamn it, not again

Even if I was doing a lot of find and replace, it just isn’t useful. You can usually cover up half your code while finding and replacing without too much hassle. The only thing this does is make it really hard to move the damn window anywhere, because if you aren’t careful you’ll accidentally dock it to the toolbar, and then you have to pull the damn thing out and hope nothing else blew up, and if you’re really unlucky you’ll have to reconstruct the whole freaking solution window.

That isn’t helpful. The fact that the act of docking and undocking can be excruciatingly slow makes things even worse and is inexcusable. Only a UI that is bloated beyond my ability to comprehend could possibly have such a difficult time docking and undocking things, no doubt made worse by their fetishization of docking windows. Docking windows correctly requires that you account for the extremely common mistake of accidentally undocking or docking a window where you didn’t want it. If a mistake is so common and so annoying, you should make it either much harder to make, or very easy to undo. In this case, you should remember where the window was docked last (or not docked), and make a space for it to be dropped into, instead of forcing the user to figure out which magic location on the screen actually docks the window to the right position (which sometimes comes down to differences of 2 or 3 pixels, which is absurd).

[Image: spot holding - the window’s previous dock location held open as a drop target]
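
Here’s a minimal sketch of that spot-holding idea, with hypothetical types:

```cpp
// Sketch of spot holding: remember each window's last dock location so an
// accidental undock can be undone by dropping it straight back.
#include <map>
#include <string>

struct DockSpot { int container = 0, slot = 0; };

class DockManager {
  std::map<std::string, DockSpot> last_spot_; // window name -> previous home
public:
  void OnUndock(const std::string& window, DockSpot from) {
    last_spot_[window] = from; // hold the old spot open as a drop target
  }
  // While this window is being dragged, offer its old spot back.
  bool HeldSpot(const std::string& window, DockSpot* out) const {
    auto it = last_spot_.find(window);
    if (it == last_spot_.end()) return false;
    *out = it->second;
    return true;
  }
};

int main() {
  DockManager docks;
  docks.OnUndock("Find and Replace", {2, 0}); // oops, didn't mean to do that
  DockSpot spot;
  if (docks.HeldSpot("Find and Replace", &spot)) {
    // render the held spot as a ready-made drop target during the drag
  }
}
```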

Panels are related to docking (usually you can dock something into a collapsible panel), but even these aren’t done very efficiently. If you somehow manage to get the window you want docked into a panel, it defaults to pinning the entire group of panels, regardless of whether they were pinned before or not. I just wanted to pin one!

[Image: pinning one panel pins the entire group]
You have to drag the thing out and redock it to pin just that one panel.

There is a better way to do this. If we click on a collapsible panel, we know the user wants to show it. However, the only reason they even need to click on the panel is because we don’t show it immediately after the mouse hovers over the button. This time should be less than a tenth of a second, and it should immediately close if the mouse gets too far away. It should stay open if the mouse is close enough that the user might have temporarily left the window but may want to come back in. Hovering over another panel button immediately replaces the current panel (in this case, at least), and dragging the panel title bar or the panel button lets you dock or undock.

Now the user never needs to click the panel to make it show up, so we can make that click do something else. Why not make clicking a panel pin it open? And don’t do any of that “pin the entire bunch of panels” crap either; just pin that one panel, and let the other panels still pop up over it. Then, if you click the panel button again, it’s unpinned. This is so much better than the clunky interfaces we have right now, and we got there by thinking about Implicit UI Design. By making the mouse click redundant, we could take it to imply that the user wants the panel pinned. Moving the mouse far away from the panel implies that the panel is no longer useful. To make sure a mistake is easy to correct, pinning a panel should be identical to simply having it hovered over indefinitely, and should not change the surrounding UI in any way. Then a mistake can be undone by clicking the panel button again, which is a much larger target than a tiny little pin icon. Combine this with our improved docking above, so that a mistakenly undocked panel, when clicked and dragged again, has its old spot ready and waiting in case you want to undo your mistake.

[Image: panel holding]
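
Putting the proposed panel behavior together in a minimal sketch - the timing and distance thresholds are guesses, and none of these names correspond to a real toolkit:

```cpp
// Sketch of the panel behavior above: hover shows the panel almost instantly,
// distance hides it, and a click - now redundant for showing - toggles a pin.
struct Panel {
  bool visible = false;
  bool pinned = false;

  void OnButtonHover() { visible = true; }  // fire within ~0.1s of hover

  // The freed-up click toggles the pin; unpinning also lets the panel drop.
  void OnButtonClick() { pinned = !pinned; visible = pinned; }

  // Called as the mouse moves, with its distance from the panel in pixels.
  void OnMouseMove(float distance_px) {
    if (!pinned && visible && distance_px > 80.0f)
      visible = false; // mouse left: the panel is no longer useful
  }
};

int main() {
  Panel p;
  p.OnButtonHover();     // pops up with no click at all
  p.OnMouseMove(200.0f); // stray too far and it collapses again
  p.OnButtonClick();     // the click now means "pin open"
}
```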

It’s 2012. I think it’s high time our user interfaces reflected that.


Linux Mint 12 KDE


Over the course of 3 hours spent trying to figure out why my Linux Mint 12 KDE installation would go to a permanent black screen on boot, I managed to overheat part of my computer (at least, that is the only explanation I have) to the point where it would lock up on the POST, and I had to give up until this morning, when I figured out that I could delete the xorg.conf file in Mint to force it back to default settings. This finally got Mint 12 to show up, and I later confirmed that the nvidia drivers were broken. I then discovered that the nvidia drivers in the default apt-get sources are almost 120 versions behind the current release (295), but the installer for the current release kept failing despite my attempts to fix it, which included adding source repositories to apt-get (these are apparently disabled by default, which confused me greatly for a short period while I was trying to install the Linux source). This ultimately proved futile, since Nvidia can’t be bothered to make anything remotely easy on Linux, so I’m stuck with a half-broken default driver that can’t use my second monitor, and a mouse cursor that repeatedly blinks out of existence in a very aggravating manner.

Of course, the only reason I even have Linux installed is to compile things on Linux and make sure they work, so as long as my development IDE works, I should be fine! Naturally, it doesn’t. KDevelop4 is the least insultingly bad coding IDE available for Linux that isn’t VIM or Emacs, which both follow in the tradition of being so amazingly configurable that you’ll spend half your development cycle trying to get them to work and then learning all the arcane commands they use in order to have some semblance of productivity. Like most bizarre, functionality-oriented Linux tools, after about 6 months of torturing yourself with them, you’ll probably be significantly more productive than someone on Visual Studio. Sadly, the majority of my time is not spent typing code; it’s spent sitting in front of a computer screen for hours trying to figure out how to get 50 lines of code to work properly. Saying that your text editor makes you super productive assumes you are typing something all the time, and if you are doing that, you are a codemonkey, not a programmer. Hence, I really don’t give a fuck how efficient a given text editor is, so long as it has a couple of commands I find useful. What’s more important is that it works. KDevelop4 looked like it might actually manage this, but sadly it can’t find any include files. It also can’t compile anything that isn’t C++, because it builds everything with CMake and refuses to properly compile a C file. It has a bunch of hilariously bad user interface design choices, and basically just sucks.

So now I’m back on the command line, editing my code in Kate, the default text editor for Mint 12, and compiling it with GCC from the terminal. This, of course, only works when I have a single source file. Now I need to learn how to write makefiles, which are notoriously confusing and ugly, require me to do all sorts of weird things, and leave me hoping GCC’s -M option actually generates the right rules, because the compiler itself is too stupid to figure out dependencies on its own and I have no IDE to tell it what to do. Then I have to link everything, and then I have to debug my program from the terminal with command-line gdb, which is one of the most incredibly painful experiences I have ever had. Meanwhile, every single user experience in Linux is still terribly designed: optimized for everything except what I want to do, and difficult to configure because everything lets you configure too much stuff and is eager to tell you, in 15000 lines of manual pages, about every single esoteric command no one will ever use. That makes it almost impossible to find anything until you hit on the exact sequence of letters that turns up the command you’re looking for, and not another one that looks almost like it but does something completely different. That, of course, is if you have a web browser. I don’t know what the fuck you’d even do with man alone. I assume you’d have to just pipe the thing into grep to find anything.
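
For reference, the dance I’m describing boils down to something like this minimal sketch, assuming two hypothetical source files, main.c and util.c (-MMD is the variant of -M that writes each file’s header dependencies to a .d file as a side effect of compilation):

```make
# Minimal sketch: build main.c and util.c into 'app', letting GCC emit
# dependency rules (-MMD) so header changes trigger rebuilds automatically.
# (Recipe lines must be indented with real tabs.)
CC     := gcc
CFLAGS := -Wall -g -MMD
SRCS   := main.c util.c
OBJS   := $(SRCS:.c=.o)

app: $(OBJS)
	$(CC) $(OBJS) -o $@

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

# Pull in the generated .d dependency rules from previous builds, if any.
-include $(OBJS:.o=.d)

clean:
	rm -f app $(OBJS) $(OBJS:.o=.d)
```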

This is not elegant. It’s a bit sad, but mostly it’s just painful. I hate Linux. I don’t hate the kernel. I don’t hate all the functionality. It’s just that the people who use Linux do not think about user experience. They think in terms of translating command-line functions into GUIs, try desperately to hang on to whatever pretty graphics are cool, and when something doesn’t work they tell you to stop using proprietary drivers from Nvidia, except the non-proprietary drivers can’t actually do 3D yet, but that’s ok, no one needs that stuff. Never mind that Linux Mint 12 doesn’t actually come with any diagnostic or repair tools whatsoever. Never mind that every single distro I’ve tried so far has been absolutely terrible in one way or another. The more I am forced to use Linux, the more I crave Windows and the fact that things tend to just work there. Things don’t work very well in Windows, but at this point that seems better than Linux’s apparent preference for “either it works really well or it doesn’t work at all”.

We could try to point fingers, but that usually doesn’t solve anything. It’s partly Nvidia’s fault, it’s partly the software vendors’ fault, it’s partly the fault of a 40-year-old window rendering engine that’s so out of touch with reality it is truly painful, and it’s partly the users, who are either too dumb to care about the broken things or too smart to use the broken things. It’s a lot of people’s fault. It’s a lot of crap. I don’t know how to fix it. I do know that it is crap, and it is broken, and it makes me want to punch the next person who says Linux is better than everything in the face, repeatedly, until he is a bloody mess on the ground begging for death to relieve his pain, because there are no words for expressing how much I hate this, and if you think I’m not being fair, good for you, I don’t give a fuck. But then, I can never reason with people who are incapable of listening to alternative ideas, so it’s usually useless to bring the topic up anyway. I suggest hundreds of tweaks to things that need to do [x] or do [x] and people are like NO THAT ISN’T NECESSARY GO AWAY YOU KNOW NOTHING. Fine. Fuck you. I’m done with this shit.

Oh wait, I still have to do my Linux homework.

EDIT: Upon further inspection, Linux Mint 12 is melting my graphics card. The graphics card fan is on all the time, and if I run Mint for too long - installed, LiveCD, or anything else - the fan stays on the whole time, and on restart the POST check locks up. However, after turning the computer off for 10 minutes and booting into Windows, the temperature reads 46°C, which is far below dangerous levels, so either the card has a very good heatsink or it isn’t actually melting; it’s just being driven improperly, which doesn’t really make me feel any better. Either way, I am now in an even more serious situation, because I have homework to do but Linux Mint is literally breaking my computer. I’d try to fix it by switching graphics drivers, but at this point every single driver available is broken. ALL OF THEM. I don’t even know what to do anymore.


'Programmer' is an Overgeneralization


*"Beware of bugs in the above code; I have only proved it correct, not tried it."* - Donald Knuth

Earlier today, during a google-fu session, I came across a post claiming that no one should use the C++ standard library function make_heap, because almost nobody uses it correctly. I immediately started mentally ranting about how utterly ridiculous this claim is, because anyone who’s taken a basic algorithms class would know how to properly use make_heap. Then I started thinking about all the programmers who don’t know what a heap is, and who furthermore probably don’t even need to know.
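
For the record, correct usage isn’t arcane; here’s a minimal sketch of the standard pattern - the part people apparently get wrong is that push_heap and pop_heap must be manually paired with push_back and pop_back:

```cpp
// Correct std::make_heap usage: the heap lives in a plain vector, and
// push_heap/pop_heap must be paired with push_back/pop_back yourself.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
  std::vector<int> v{3, 1, 4, 1, 5, 9};
  std::make_heap(v.begin(), v.end());  // max-heap across the vector, O(n)

  v.push_back(7);                      // insert: append first...
  std::push_heap(v.begin(), v.end());  // ...then sift the new element up

  std::pop_heap(v.begin(), v.end());   // extract: max swaps to v.back()
  int largest = v.back();
  v.pop_back();

  std::cout << "largest was " << largest << '\n'; // prints "largest was 9"
}
```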

Then I realized that both of these groups are still called programmers.

When I was a wee little lad, I was given a lot of very bad advice on proper programming techniques. Over the years, I have observed that most of the advice wasn’t actually bad advice in and of itself; rather, it was being given without context. The current startup wave has had the interesting effect of making a lot of hackers realize that “performance doesn’t matter” is a piece of advice riddled with caveats and subtle context, especially when dealing with complex architectures that can interact in unexpected ways. While this broken-telephone effect arising from the lack of context is a widespread problem on its own, in reality it is simply a symptom of an even deeper problem.

The word programmer covers a stupendously large spectrum of abilities and skill levels. On a vertical axis, a programmer could barely know how to use VBScript, or they could be writing compilers for Intel and developing scientific computation software for aviation companies. On a horizontal axis, they could be experts on databases, or wringing performance out of a GPU, or building concurrent processing libraries, or making physics engines, or doing image processing, or generating 3D models, or writing printer drivers, or using CoffeeScript, HTML5, and AJAX to build web apps, or using nginx and PHP to write the server stack the web app is sitting on, or maybe they write networking libraries or do artificial intelligence research. They are all programmers.

This is insane.

Our world is being eaten by software. In the future, programming will be a basic course alongside reading and math. You’ll have four R’s - Reading, ‘Riting, ‘Rithmetic, and Recursion. Saying that one is a programmer will become meaningless, because 10% or more of the population will be a programmer on some level. Already the word “programmer” has so many possible meanings that it’s like calling yourself a “scientist” instead of a physicist. Yet what other choices do we have? The only current attempt at fixing this gave a paltry 3 options that are simply incapable of conveying the differences between me and someone who graduated from college with a PhD in artificial intelligence. They do multidimensional mathematical analysis and evaluation using functional languages I will never understand without years of research. I’m supposed to write incredibly fast, clever C++ and HLSL assembly while juggling complex transformation matrices to draw pretty pictures on the screen. These jobs are both extremely difficult for completely different reasons, and neither person can do the other person’s job. What is good practice for one is an abomination for the other. We are both programmers. Even within our own fields, we are simply graphics programmers or AI programmers or [x] programmers.

Do you know why we have pointless language wars, and meaningless arguments about what is good in practice? Do you know why nobody ever comes to a consensus on these views except in certain circles where “practice” means the same thing to everyone? Because we are overgeneralizing ourselves. We view ourselves as a bunch of programmers who happen to specialize in certain things, and we are making the mistake of thinking that our viewpoint applies outside of our area of expertise. We are industrial engineers trying to tell chemists how to run their experiments. We are architects trying to tell English majors how to design an essay because we both use lots of paper.

This attitude is deeply ingrained in the core of computer science. The entire point of computer science is that a bunch of basic data structures can do everything you will ever need to do. It is a fallacy to try and extend this to programming in general, because it simply is not true. We are forgetting that these data structures only do everything we need to do in the magical perfect land of mathematics, and ignore all the different implementations that are built for different areas of programming, for completely different uses. Donald Knuth understood the difference between theory and implementation - we should strive to recognize the difference between theoretical and implementation-specific advice.

It is no longer enough to simply ask someone if they are a programmer. Saying a programmer writes programs is like saying a scientist does science. The difference is that botanists don’t design nuclear reactors.

