Tuesday, March 13, 2012
(Just found that this has been sitting as a draft for the last six-plus months...)
Wow, another year goes by without a post. And granted, little time has gone to MMORF, what with life's other concerns and distractions. But, like the tides, my interests ebb and flow, and for now they have flowed back to MMORF.
I've started a skeleton structure for MMORF, abstracting the various server concepts and even putting together the roughest of text clients for testing purposes. Perhaps it's the complexity of the project, or perhaps it's old age, but I'm finding myself needing to sketch and list out ideas and concepts a lot more than I used to...
I've been using FogBugz and Kiln for a few months now, for a variety of projects, and have found that jotting down milestones and future ideas there, instead of in code comments, is working better than I expected. I've also realized that I really need to learn to use a tool like Visio so I can sketch out flows and connections, because my hand-sketched drawings just don't cut it.
Even though work happens in small bursts and fits, usually within the span of a lunch hour, I'm finding I'm learning the .NET libraries quickly, and picking up the cool C# features at the same time. With the Mono system as mature as it is, thanks to the guys at Xamarin, I feel C# and .NET are the right choice for MMORF. The serialization support means I don't have to worry as much about a protocol between client and server or between servers, because objects can be used as the protocol and can be passed effortlessly between them. Indeed, by choosing the method of serialization, I also leave open the ability to watch traffic by keeping it somewhat human-readable in XML, or to stream it in binary form for a bit of speed and a little obfuscation.
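To make that trade-off concrete, here's a minimal sketch, assuming a hypothetical ChatMessage type of my own invention (not anything from MMORF), of how the same object can go over the wire either as human-readable XML for debugging or as a binary stream for a bit of speed:

    // Same message, two wire forms: XML you can read while debugging,
    // or binary for speed and a little obfuscation.
    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Xml.Serialization;

    [Serializable]                      // needed for the binary formatter
    public class ChatMessage            // hypothetical message type
    {
        public string From { get; set; }
        public string Text { get; set; }
    }

    public static class WireDemo
    {
        public static void Main()
        {
            var msg = new ChatMessage { From = "kaylee", Text = "hello, server" };

            // Human-readable: easy to watch on the wire during development.
            var xml = new StringWriter();
            new XmlSerializer(typeof(ChatMessage)).Serialize(xml, msg);
            Console.WriteLine(xml.ToString());

            // Binary: smaller and mildly obfuscated, at the cost of readability.
            using (var bin = new MemoryStream())
            {
                new BinaryFormatter().Serialize(bin, msg);
                Console.WriteLine("{0} bytes of binary", bin.Length);
            }
        }
    }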
As with all large projects (at least, for me), doing it so piecemeal has the big disadvantage of having to ramp back up to where you left off when you come back to it a day later. Unlike some other projects that I've been trying to work on, though, this one has been churning in my head for a long time (six-and-a-half years!), so the "where was I?" problem is definitely lessened.
Here's hoping that the next post is sooner than a year from now, and has some exciting developments!
Wednesday, July 07, 2010
Offloading the client
As I mentioned previously, I've had to start working on a bare-bones client so I have a way to test server development as it goes forth. This was a bit disconcerting, because the goal is the server, not the client, but I may have found a solution!
I currently split my rare programming time between two projects, this one and Windows Phone 7 development. I try to give them equal time, and with this project having the client development requirement, that was resulting in maybe 25% of my coding hours going to MMORF. However, I may have found a way to soon return that to the 50% it deserves, by moving the client development to Windows Phone.
For the next little while, I'll still have to put some effort into this console client, just to have that for quick testing, but for anything more than a text interface, I can certainly justify it as "Windows Phone 7 development" and use my phone development time for it. And if my other phone-related projects suffer because of it, well, I can bitch about it over on that blog.
Thursday, July 01, 2010
Protocol
The whole point of this project, as I may have mentioned before, is to develop the servers, the backend, for the roleplaying framework, not the client. Of course, developing one or more clients is to be expected as I work on the project, because much like you'd expect from a blind painter, it helps to have a visual verification that the result is at all accurate; I could work on the server-side code all I want and HOPE it works, but I'd really want to SEE that it works eventually.
Knowing that, I knew that some day I'd be working on a client -- text-based, 2D, 3D, whatever -- but I never thought it'd be so soon. In fact, I'm actually writing a text-based console client before I get any of the "useful" part of MMORF written: even the server-side code that I'm writing to test the client is just login server stuff right now. Where's the good stuff??
Of course, writing the client means that I've had to address the issue of the protocol between the client and the server, something I've briefly discussed before and have already spent a bit of time thinking about. When all is said and done, two things are important in a protocol: it's efficient, and it's debuggable. The second can always be achieved regardless of how the protocol is implemented, but how readable it is determines how easily it is debugged. Sure, you can write loggers and decoders to translate some binary stream into meaningful words for the developer, but when first developing the system, you're better off making the protocol human-readable to start and worrying about efficiency later, provided you coded the networking subsystems to handle a complete change to the protocol. That means all communication is handled through functions that generate the protocol packets via a drop-in system, one which can be replaced without anyone being the wiser. In fact, if the client can support every drop-in system you've devised, based on its ability to detect what's coming down the pipe, there's no rewriting later in the development cycle.
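As a rough sketch of that drop-in idea (the interface and class names here are my own, not MMORF's): the networking code only ever talks to a small abstraction, so the human-readable implementation used during development could later be swapped for a binary one without callers being any the wiser.

    using System;
    using System.IO;
    using System.Xml.Serialization;

    // The drop-in point: networking code writes and reads through this,
    // never caring what the bytes on the wire look like.
    public interface IWireFormat
    {
        void Write(Stream stream, object message);
        object Read(Stream stream, Type expectedType);
    }

    // Development-time implementation: human-readable XML on the wire.
    public class XmlWireFormat : IWireFormat
    {
        public void Write(Stream stream, object message)
        {
            new XmlSerializer(message.GetType()).Serialize(stream, message);
        }

        public object Read(Stream stream, Type expectedType)
        {
            return new XmlSerializer(expectedType).Deserialize(stream);
        }
    }
    // A BinaryWireFormat could implement the same interface later, once
    // efficiency matters more than readability.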
When I talk about a human-readable protocol, I typically mean an interchange format such as XML, JSON, YAML or even Metaplace's MetaMarkup. It's something that balances readability by humans and parseability by computers. Myself, I tend to lean towards XML, because it handles hierarchical data very well (both structurally and visually), it copes with escaping characters used by the protocol WITHIN the protocol (something which was a recurring theme with MetaMarkup), and because I've had some experience with it as a network protocol.
The only unfun part of developing a protocol is ... developing a protocol. When you get to the binary level and are trying to squeeze as much data through as possible, THEN it might actually be fun, but at the XML level, it can be tedious coming up with the schema for all of the different packets that are going to fly by.
That's why I was quite pleased to find out that the serialization of C# objects results in: XML! Instead of having to come up with an XML protocol schema, all I have to do is come up with an object schema covering all of the communication needed between client and server and between servers, then pass these objects through the network and let them pop out on the other side as objects that need to be handled. Even better, handlers for these objects can actually be part of the objects themselves.
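As a hedged illustration of what that looks like in practice: the Login name comes from the post, but the fields and the Handle() method below are my own guesses. The XML that XmlSerializer produces from this class is, effectively, the protocol; I never have to write the schema by hand.

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class Login
    {
        public string Account { get; set; }
        public string PasswordHash { get; set; }

        // Handlers can live on the object itself: whoever deserializes a
        // Login just asks it to handle itself.
        public void Handle()
        {
            Console.WriteLine("Login request for account '{0}'", Account);
        }
    }

    public static class ProtocolDemo
    {
        public static void Main()
        {
            var serializer = new XmlSerializer(typeof(Login));
            var login = new Login { Account = "tester", PasswordHash = "..." };

            // "Send": the XML that comes out is the protocol; no packet
            // schema to design.
            var buffer = new MemoryStream();
            serializer.Serialize(buffer, login);

            // "Receive": the other side gets a Login object back and handles it.
            buffer.Position = 0;
            var received = (Login)serializer.Deserialize(buffer);
            received.Handle();
        }
    }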
It's a pretty slick system. It took a bit of designing to come up with a general server framework, allowing the client to handle various login servers (local, object-based, and remote) and client servers (local and remote), but this is done and mostly implemented. I now have a console client that launches its own local login and client servers (internal objects, not even using a loopback network connection) and can send login packets (or rather, serialized login objects). I had to design none of the protocol, instead letting the .NET serialization system do it for me, and I had to neither write out nor read in the stream of data, deconstructing or reconstituting its meaning. Instead, I create a Login object, call my LoginServerConnection's Send() function with it, and on the other side the LoginServer sees a Login object pop out, which it may deal with as it pleases.
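Here's a rough sketch of that local/remote split, reusing the Login class sketched above. ILoginServerConnection and LocalLoginServerConnection are hypothetical names of my own; Login, LoginServer and Send() are the names mentioned in the post.

    // The client only ever talks to a connection abstraction.
    public interface ILoginServerConnection
    {
        void Send(Login message);
    }

    // Local, object-based connection: the Login object is handed straight to
    // an in-process LoginServer -- no serialization, not even a loopback socket.
    public class LocalLoginServerConnection : ILoginServerConnection
    {
        private readonly LoginServer server;

        public LocalLoginServerConnection(LoginServer server)
        {
            this.server = server;
        }

        public void Send(Login message)
        {
            server.Receive(message);
        }
    }
    // A remote connection could implement the same interface by serializing
    // the Login over a socket; the caller wouldn't know the difference.

    public class LoginServer
    {
        // The server just sees a Login object pop out and deals with it
        // as it pleases.
        public void Receive(Login message)
        {
            message.Handle();
        }
    }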
Just think: I'm almost at the point where I can actually design some of the game server and see it working. Amazing.
Friday, April 08, 2005
Never trust the client
A few days ago, in the #ddo IRC channel on SorceryNet, the topic of packet sniffers came up, with respect to the upcoming Dungeons & Dragons Online game.
The issue goes back to the more general rule, "never trust the client", or "The client is in the hands of the enemy". The fact is that no matter how much you try to hide or obfuscate the operation of your client, there are people out there that will figure it out, and they will write a cheat program for your users to use.
The solution, then, is "never trust the client" - that is, anything your client says to do, you verify as "correct" or possible. Instead of taking orders from the client, you take suggestions. And client trust isn't only about communication, where you're listening to what the client is saying. You can't even trust the client to perform in a commanded way, such as displaying something it should or hiding something it shouldn't. The earliest case I heard of (as a player in the community) was in Ultima Online, where the game servers would tell the clients to darken the screen, because it was nighttime or because the player was in a dark dungeon. It was meant to provide atmosphere, but also, I presume, to add more challenge to adventuring in the darkness. I don't know how long it took, but individuals figured out the packets being passed back and forth between the server and client, isolated the one that said "make the screen dark", and stripped it out (or modified it to say "make the screen bright").
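For the sake of illustration, here's a minimal and entirely hypothetical sketch of what treating client input as a suggestion looks like: the server re-checks a requested move against its own authoritative state instead of believing the client's claim.

    public class MoveRequest
    {
        public double X { get; set; }
        public double Y { get; set; }
    }

    public class PlayerState
    {
        public double X { get; set; }
        public double Y { get; set; }
        public double MaxStepDistance { get; set; }
    }

    public static class MoveValidator
    {
        // Returns true only if the requested position is actually reachable
        // this tick; otherwise the "order" from the client is ignored.
        public static bool TryApply(PlayerState player, MoveRequest request)
        {
            double dx = request.X - player.X;
            double dy = request.Y - player.Y;
            double distance = System.Math.Sqrt(dx * dx + dy * dy);

            if (distance > player.MaxStepDistance)
                return false;            // impossible move; the suggestion is rejected

            player.X = request.X;        // the server's state is the only one that counts
            player.Y = request.Y;
            return true;
        }
    }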
In the case of the conversation in #ddo, the issues were being able to see traps you hadn't detected yet, or knowing of the presence of monsters that were hiding in the shadows. I, of course, stated flat out that this kind of information shouldn't be in the hands of the client until the player is supposed to know about it. This started a flurry of responses, from "yeah, but every other MMO has had the problem" to "it would be too laggy if you didn't send this stuff beforehand".
Now, I'm one of the first to grit my teeth when I'm reading gaming forums and see people say "they should add feature X. It's really easy, they just need to ..." Invariably, these people are not programmers, and if they are, they're not good ones, and if they are good, then they still don't have the knowledge to say how easy it is to add a feature to someone else's codebase. Almost as bad are people who seem to know what *can't* be done. This is, however, what the discussion became, me included.
One party insisted that game developers are going to keep making this mistake, even though (I assured them) experienced developers know about these previous mistakes in the industry and would know better. Another party insisted that they couldn't get away from sending early information to the client because of lag problems (which is a slightly better argument than "they'll do it because all games have it").
I disagree with them both. One, at least, is a programmer. And while I find many of the vocal "easy" people on forums try claiming programming skill as well, I usually call bollocks on their abilities, and chalk them up as know-it-alls that know nothing. This chatroom member, though, I'm willing to give credit as a capable programmer, and thus I just disagree with him.
First of all, the "there's always something that a packet sniffer can find" argument is just crap. Yes, it might be that every game so far has had some client vulnerability, but to argue that every game in the future must therefore have the same is ludicrous. The client is just an interface to the information sent to and by the server. It should never have extra information that isn't displayed. Enough of that argument.
But the point about lag is valid. The example went something like "if a dozen goblins were sneaking up on you, and suddenly stepped out of the shadows, the sudden surge of data from the server to the client would cause the client to lag and the player to die (or be at a disadvantage)". Fair enough, that might happen. But, this brings up a few points:
- Is the protocol so "bulky" that the information about these dozen goblins (or whatever information suddenly becomes available) will cause such a discernible lag? If so, can the protocol be optimized? There's a reason why MMOs aren't "twitch" games -- the latency of the Internet (discussed previously) and the disparate speeds of players' computers -- so is the game too twitchy if this sudden information is a problem?
- Does all the information have to be sent immediately? Can the server not say "draw some shadowy figures - I'll let you know what they are in a sec"? Are full texture descriptions being sent up, as one of my debaters suggested, instead of being pre-existing on the client or sent a little later? (See the sketch after this list.)
- Can the game be designed so that any information sent early (and thus hacked) is of insignificant value? In the case of the darkness in UO, I think it might have been changed so that "dark" wasn't really so bad, moving it from a game-affecting feature (and thus an advantage to the cheaters) to a purely visual, mood-lighting effect.
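Here's a small sketch of the "shadowy figures now, details later" idea from the second point above. The message types are hypothetical; the shape of the approach is what matters: the server only ever sends what the player is currently entitled to know.

    using System.Collections.Generic;

    // Minimal placeholder: enough for the client to draw "something is there".
    public class EntityGlimpse
    {
        public int EntityId { get; set; }
        public double X { get; set; }
        public double Y { get; set; }
    }

    // Full details, sent only once the entity is actually revealed to the player.
    public class EntityDetails
    {
        public int EntityId { get; set; }
        public string Name { get; set; }
        public int Level { get; set; }
    }

    public static class RevealDemo
    {
        public static IEnumerable<object> MessagesFor(bool playerHasSpottedThem)
        {
            // Phase 1: the cheap, low-value glimpse. Even if a packet sniffer
            // sees it, it learns almost nothing worth cheating with.
            yield return new EntityGlimpse { EntityId = 42, X = 10, Y = 3 };

            // Phase 2: the interesting data, only after the server decides the
            // player is allowed to know it.
            if (playerHasSpottedThem)
                yield return new EntityDetails { EntityId = 42, Name = "Goblin", Level = 2 };
        }
    }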
Please, prove that one chatroomer wrong.