Wednesday, February 17, 2010

Game Engine Architecture

I've started reading Game Engine Architecture, and the first chapter starts off with a large diagram featuring all of the subcomponents of a typical game engine. It's quite the diagram, and it allowed me to get a feel for the totality of what MMORF will become, and what will be left to client implementation.

Because I'm currently looking at C#/.NET for MMORF, the bottommost layers of the diagram -- Target Hardware, Device Drivers and Operating System -- are all abstracted away by the CLR in which MMORF will run. Whether on Windows or on a Mono-supported platform, these will hopefully not matter to me.

Third-Party SDKs and Middleware

In previous posts I've hemmed and hawed about using these in MMORF, trying to decide whether it's a cop-out to use them instead of developing my own, whether I'd just be reinventing the wheel, whether I have the time to invest in Wheel 2.0, and whether I could do a decent enough job. I still haven't decided for sure, but I'm still leaning towards coding everything myself, with an open mind to dropping in some middleware if it seems prudent. This section also includes a lot of client-side SDKs, such as OpenGL, which I'm much more open to using for any client implementation I do; writing a 3d library would be fun and all, but I know where to draw the line on priorities with my time. Unity, XNA or WebGL will gladly be leveraged in any client implementation beyond a text interface.

The AI, however, is something that I have a real interest in tackling myself, even if other parts of MMORF are eventually dumped onto other SDKs. I've got myself a growing library of game AI books and a growing appetite for coding it. So much so, in fact, that I have to avoid working on that part of MMORF as long as I can to prevent it from consuming too much time.

Platform Independence Layer

This layer, too, is abstracted away by the CLR, so it is "solved" in this respect.

Core Systems

Most of these fall under the .NET library or the C# language itself, so they're provided to me. Huzzah.

Resource Manager (and Collision and Physics)

This is largely the realm of the client, dealing with graphics and rendering, but the latter part of this block, including the collision, physics and map resources, is all server-side, and will all be part of MMORF. Collision and physics are large components, ones that will certainly be implemented later in the project, as there are worlds that can be implemented without them, and the scripting engine would allow world builders to code these themselves if necessary (as we saw in Metaplace when the built-in collision and physics support wasn't adequate for all cases).
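
To make that idea a little more concrete, here's a minimal sketch of what a script-replaceable collision rule might look like. Every name here is a placeholder of my own invention, not anything from the book or from existing code:

    // Hypothetical sketch: a collision rule that a world builder's script
    // could replace. All names are placeholders, not actual MMORF APIs.
    using System.Collections.Generic;

    public interface ICollisionRule
    {
        // Decide whether an object may occupy the tile at (x, y).
        bool CanOccupy(int x, int y);
    }

    // A trivial built-in rule: a fixed set of blocked tiles.
    public class BlockedTileRule : ICollisionRule
    {
        private readonly HashSet<long> blocked = new HashSet<long>();

        public void Block(int x, int y)
        {
            blocked.Add(Key(x, y));
        }

        public bool CanOccupy(int x, int y)
        {
            return !blocked.Contains(Key(x, y));
        }

        private static long Key(int x, int y)
        {
            // Pack the coordinates into a single set key.
            return ((long)x << 32) | (uint)y;
        }
    }

A world with collision turned off could register a rule that always returns true, while a physics-heavy world could swap in something far more elaborate, all without the engine itself changing.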

Rendering Engine and Human Interface Devices

These are wholly the realm of the client. I put them together because they pretty much make up what the client is, and apart from their communication to/from the server, they are completely separate from MMORF itself.

Profiling and Debugging Tools

Err, yeah... I have to admit that I'm really bad about this. I'm still an old-school printf() debugger unless it comes down to a timing issue. And profiling is an end-of-project task for me unless performance is so unbearably poor during development that time needs to be taken to improve it for the sake of continued development. Naturally, profiling MMORF will be important once it starts to see use, but in its early days, it just won't matter much.

That's the MMORF engine itself, though. I *can* see providing tools for profiling the actual framework stats earlier, as the In-Game Menus or Console portion hints at: regardless of how lousy the performance of the actual engine is, knowing where it's spending its time -- running scripts, checking collisions, processing AI, backing up data -- is something I'd want an earlier set of profiling tools for.
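
As a rough idea of what those early stats might look like -- and this is just a sketch of my own, not anything from the book -- a per-subsystem timer built on .NET's Stopwatch would probably be enough to start with:

    // Rough sketch of per-subsystem timing using System.Diagnostics.Stopwatch.
    // The subsystem names are only examples; none of this is final MMORF design.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public class SubsystemProfiler
    {
        private readonly Dictionary<string, TimeSpan> totals =
            new Dictionary<string, TimeSpan>();

        // Time one piece of work and add it to that subsystem's running total.
        public void Measure(string subsystem, Action work)
        {
            Stopwatch sw = Stopwatch.StartNew();
            work();
            sw.Stop();

            TimeSpan soFar;
            totals.TryGetValue(subsystem, out soFar);
            totals[subsystem] = soFar + sw.Elapsed;
        }

        // Dump where the engine has been spending its time.
        public void Report()
        {
            foreach (KeyValuePair<string, TimeSpan> entry in totals)
            {
                Console.WriteLine("{0}: {1:F1} ms",
                    entry.Key, entry.Value.TotalMilliseconds);
            }
        }
    }

Wrapping the main loop's calls in Measure("scripts", ...), Measure("collision", ...) and so on would give exactly that running-scripts/checking-collisions/processing-AI breakdown.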

Animation and Audio

Both of these sections fall under the realm of the client. As I develop MMORF, I always keep the idea of the text-only client in mind, and thus look at all of these features from that point of view: what's the text-only client's equivalent of animation and audio? Text descriptions of what's going on, whether provided as metatext to the animation and audio data ("the guard rocks against his spear, unaware of your presence" or "the sword clangs loudly off of your shield!") or perhaps as nothing at all, emphasizing why graphical clients have become popular, delivering the image and sound to you directly instead of leaving it up to your imagination.
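
One way to picture that metatext idea -- purely as an illustration, with made-up names rather than real MMORF types -- is an event payload that carries the cues for every kind of client side by side, with each client keeping only what it can use:

    // Hypothetical sketch of a world event carrying presentation data for
    // different kinds of clients; all names are placeholders.
    public class WorldEvent
    {
        // For graphical clients: which animation and sound to play.
        public string AnimationId;   // e.g. "sword_deflect"
        public string AudioId;       // e.g. "metal_clang"

        // For the text-only client: the metatext equivalent.
        public string Description;   // e.g. "The sword clangs loudly off of your shield!"
    }

A 3d client plays the animation and sound and might ignore the description entirely; the text client prints the description and ignores the rest.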

Online Multiplayer/Networking

This book is not specific to (massively) multiplayer games, but is about game engines in general, so it approaches the multiplayer angle as an optional component, though admittedly an increasingly common one. For MMORF, this is a required component -- perhaps the defining one -- and it is naturally the realm of both the client and the server, since multiplayer is the whole point of the project.

Gameplay Foundation Systems

This is the core of MMORF. This is the objects in the world and their behaviours, the scripting engine behind the world, the event-handling within the world and between the world's objects. Sure, we need collision and physics on top, and file access and networking on the bottom, but this is the real body of the MMORF system, and probably the most interesting. This is the part I will likely blog about most, because file access and networking aren't exciting, and collision and physics... well, let's hope I get there.
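
For a very rough sense of what I mean by objects, behaviours and events -- and this is only a sketch with placeholder names, nothing I've actually committed to -- something like this:

    // Sketch of the "objects, behaviours, events" core; all names are
    // placeholders for illustration, not actual MMORF design.
    using System;
    using System.Collections.Generic;

    public class WorldObject
    {
        public string Name;

        private readonly Dictionary<string, Action<WorldObject, WorldObject>> handlers =
            new Dictionary<string, Action<WorldObject, WorldObject>>();

        // Scripts attach behaviour by registering handlers for named events.
        public void On(string eventName, Action<WorldObject, WorldObject> handler)
        {
            handlers[eventName] = handler;
        }

        // The engine (or another object) raises events; the object's script reacts.
        public void Raise(string eventName, WorldObject source)
        {
            Action<WorldObject, WorldObject> handler;
            if (handlers.TryGetValue(eventName, out handler))
                handler(this, source);
        }
    }

A world builder's script could then wire up a door with door.On("use", (self, user) => ...) and the engine would call door.Raise("use", player) when the player interacts with it.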

Game Specific Subsystems

This is quite the range of topics, and at first glance, I'd be tempted to split the workload between "client" and "scripting". But part of the point of MMORF is to make the engine generic enough (a Foundation) so that the client doesn't have to have game-specific coding. The top blocks, such as weapons, power-ups, vehicles, etc., are all the realm of the scripts that power the game, and how they are presented to the player should be done in a generic enough way that any client can render them just as it would any other game world.

Game-Specific Rendering and Game Cameras definitely feel like they fall into the realm of the client, but I think the rendering is just a function of the client's interpretation of the server's presentation of the player's view: if I look in my text client, I get a text description; in 2d I get 2d sprites rendered up to my view range; and in 3d I get nicer-looking figures, textured and cropped again based on the player's view range or stats. A game camera may or may not be considered an in-game function; does the fact that I cannot see behind me in my 3d client mean that anything approaching from behind in my text client shouldn't be revealed? The idea of "facing" would be up to the game developer, and if facing wasn't important (as it isn't in most 2d and text-based games) then the 3d player either accepts this disadvantage (as the price of the lovely rendering of his or her world) or gets indicator arrows, a rear-view mirror, or some other client-side device to allow 360-degree viewing.

And of course, collision/physics and AI once again appear, and again, would be the realm of the server, and specifically the scripting engine, whether as flags ("collision detection ON") or as function calls to be used ("walk to here directly", "walk to here avoiding being seen").
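
At the scripting surface, that split between flags and function calls might look something like this -- hypothetical names again, not an actual MMORF API:

    // Hypothetical sketch of how those flags and calls might be exposed to
    // a world builder's script; nothing here is an actual MMORF API.
    public interface IWorldScriptApi
    {
        // Flag-style settings ("collision detection ON").
        bool CollisionDetection { get; set; }
        bool PhysicsEnabled { get; set; }

        // Function-call-style AI behaviours.
        void WalkTo(string actorId, int x, int y);    // "walk to here directly"
        void SneakTo(string actorId, int x, int y);   // "walk to here avoiding being seen"
    }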


This is only the first chapter, but the rest of the book promises to be interesting. Quite a few of the chapters focus on client-side concerns, which are only secondary to me (for now), but chapter 14, on the Runtime Gameplay Foundation Systems, looks to be very pertinent, and might just be read out-of-order.