Monday, January 28, 2019

battleMETAL - And Yet, Somehow it all works - part 3 - Animatics

Quite an animated conversation

One of the features I wanted to exist in battleMETAL was that of ‘animatics.’ For background, an ‘animatic’ is like a glorified slideshow. These are usually defined as a series of static images displayed in a sequenced order - like a super slow animation. Why would battleMETAL need these exactly? Why, for character transmissions of course. I wanted characters to be able to talk at the player, or about the player to other characters, in a way that the player could visually see and understand. Radio transmissions that don’t show some sort of image are hard to follow in the ebb and flow of a video game. The concept of a ‘radio operator character’ for players to interact with is also a long-established mechanic in gaming.


Part of the original scope of battleMETAL was that the player wouldn’t do any interacting with characters outside of combat. This was mainly due to time and resource constraints, but also because of the story and who the player is in this world. However, the player still needs some things to go off of when playing the game, so I felt that I could implement workable animatics to cover this gap. The system I finished is ‘good enough’ but could probably use some refactoring to make it ‘best’.


I started with CSQC, since animatics are, in my mind, an entirely client-side event. I created a short set of API functions to handle the flow of the animatic system. The overall design is something along the lines of the steps below (a rough sketch of the hooks follows the list):
 

    Receive event from server
    Load animation file
    Validate file
    Setup playback variables
    Render frame 1 
    Render next frame
    End playback
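Sketched out as CSQC functions, that flow looks something like this - the function names and the ANM_ playback globals here are illustrative stand-ins for this post, not the exact identifiers in the shipped code (the real ANM_FRAME_* arrays show up further down):

float ANM_ACTIVE;          // illustrative playback state: are we mid-animatic?
float ANM_FRAME_CUR;       // index of the frame currently on screen
float ANM_FRAME_COUNT;     // how many frames the loaded file defined
float ANM_FRAME_ENDTIME;   // when the current frame should advance

float anm_load( string animFile );          // stand-in names for the load/validate,
void anm_renderFrame( float frameNum );     // per-frame render, and teardown steps
void anm_end();

void anm_start( string animFile )
{
    if ( !anm_load( animFile ) )            // load + validate the .anim file
        return;                             // bad file: bail out quietly
    ANM_ACTIVE = TRUE;
    ANM_FRAME_CUR = 0;
    ANM_FRAME_ENDTIME = time + ANM_FRAME_TIM[0];
}

void anm_think()                            // called once per rendered frame
{
    if ( !ANM_ACTIVE )
        return;
    anm_renderFrame( ANM_FRAME_CUR );       // draw background, image, text, sound
    if ( time > ANM_FRAME_ENDTIME )         // time to move on?
    {
        ANM_FRAME_CUR = ANM_FRAME_CUR + 1;
        if ( ANM_FRAME_CUR >= ANM_FRAME_COUNT )
            anm_end();                      // restore music, hand control back
        else
            ANM_FRAME_ENDTIME = time + ANM_FRAME_TIM[ANM_FRAME_CUR];
    }
}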


The kick-off is the server sending a command to the client to begin an animatic event. Quake was designed from the ground up as a client-server game, even in single player. Single player in vanilla Quake is just a local game server with a max player count of 1. So even though battleMETAL only has 1 player in its server, the code still treats that player like any other remotely-connected client, which I think is a good thing. On the server, I created a custom map object, event_animatic, that can be triggered by player touch or by other map objects, and it sends a command to the target client to begin the animatic.
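On the server side, event_animatic boils down to a trigger that fires a console command at the toucher. Here is a minimal sketch of the idea, assuming the vanilla InitTrigger() helper, an anim_file field set by the level designer, and a made-up "anim_play" command as the transport (the targeted/use path is omitted, and the real entity may wire this up differently):

.string anim_file;                          // assumed field, set by the level designer

void event_animatic_touch()
{
    if ( other.classname != "player" )
        return;                             // only the player kicks off animatics
    stuffcmd( other, strcat( "anim_play ", self.anim_file, "\n" ) );
    remove( self );                         // one-shot trigger: clean up after firing
}

void event_animatic()                       // spawn function, matched by classname
{
    InitTrigger();                          // vanilla helper: SOLID_TRIGGER + brush model
    self.touch = event_animatic_touch;
}

The client then catches that command in its stuffcmd handler, which is where the next paragraph picks up.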


When the player’s CSQC receives the command, in this case changing the player’s “state” variable to _ANIMATIC, the CSQC begins the playback of the animatic. The first step is loading the file. I decided that storing animatics in plaintext files was super handy, both for readability and performance. The game engine doesn’t need to keep possibly dozens of animatic scripts in memory during gameplay, and because animatics are a low-delta event with no read/write commands, having the game load them from text files seemed like a good approach. I’ve always enjoyed the JSON syntax for data storage, considering it a better alternative to XML. Leveraging the Darkplaces source port’s ability to parse text files, I whipped up a crude JSON-style parsing function. Besides, it’s always fun making up your own file extensions.

So in test.anim we see the following:

{
  'music' : ,
  'nomusic' : 0,
  'backimg' : ,

  'trans' : ,
}

The first { } is always the ‘metadata’ tags for the animatic file. Music is which sound you want to play when the entire animatic starts up, and this will play until the end of the animatic. NoMusic will shut off any CD music playing in the background (don’t worry, it’ll resume the stopped music when the animatic finishes). BackImg is if you want 1 background image to be rendered beneath all subsequent image frames. Finally, Trans is for ‘transparency’, setting the global alpha value of the animatic during playback. These values are then stored in a 1-dimensional string array simply named ANM_META_DATA. Once the metadata is loaded, the text parser expects frame data to come next. Frames are defined by { } as well, and while there’s no true limit, for the sake of brevity I imposed a 20-frame max (do you really need moar?).


{
  'image' : gfx/hud/target_box.png,
  'pos' : 0.79 0.225,
  'size' : 0.05 0.05,
  'sound' : sound/anim/t1m4_lineb.ogg,
  'text' : ,
  'alpha' : 0.85,
  'color' : 1 0 0,
  'text_color' : 0 0 0,
  'time' : 3,
}

Image is which image you want this frame to display; you can set it to null / empty.

Pos is the on-screen position. Important: this is done as a percentage of the screen to make sure that the coordinates are screen-size agnostic.

Size is the size of the image. Important: this is also done as a percentage of the screen to make sure that the coordinates are screen-size agnostic.

Sound is the sound file you want to play on this frame. It’s non-looping, and is not clipped by any sound on the next frame, so playback overlap is a risk here.

Text is any text you want rendered; I have this hardcoded to be rendered at the bottom of the screen, centered.

Alpha is the transparency of the frame.

Color is the color tint you want to apply to the frame image and text. RGB values are in vector format, with each channel between 0.001 and 1.0.

Text_color is the same as Color but overrides the color for the text.

Time is how many seconds to render the frame for.


In addition to these tags, the code automatically does a fade-in/fade-out effect for each frame, but I’m still on the fence about whether this is necessary or not. When the text parser reads these tags into the code, the frame reference number is used as the primary key for a series of arrays. I’m not particularly happy with this solution, but I’ve chalked it up to the roughness of Quake C more than anything else...

string ANM_FRAME_IMG[20];
string ANM_FRAME_SND[20];
float ANM_FRAME_TIM[20];
vector ANM_FRAME_POS[20];
vector ANM_FRAME_SIZE[20];
string ANM_FRAME_MSG[20];
float ANM_FRAME_ALPHA[20];
vector ANM_FRAME_COLOR[20];
vector ANM_FRAME_TXT_CLR[20];


Did I mention that any sort of collections are non-existent in Quake C?
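For the curious, the load step that fills these arrays boils down to a fopen()/fgets() loop that tokenizes each line and switches on the key. Here is a condensed, hypothetical sketch - anm_value() is an imagined helper that strips the quotes, the colon, and the trailing comma off a line and returns just the value text, and the real parser also handles the metadata block and malformed files:

string anm_value( string line );            // imagined helper, see note above

float anm_load( string fileName )
{
    local float fh, blockNum;
    local string line, key;

    fh = fopen( fileName, FILE_READ );      // Darkplaces file I/O builtin
    if ( fh < 0 )
        return FALSE;                       // couldn't open the .anim file

    blockNum = -1;                          // -1 = haven't hit the metadata { yet
    while ( 1 )
    {
        line = fgets( fh );
        if not( line )
            break;                          // fgets() goes false at end-of-file
                                            // (blank lines in the file need extra care, glossed over here)
        if ( tokenize( line ) < 1 )
            continue;
        key = argv( 0 );

        if ( key == "{" )                   // every { opens a new block
        {
            blockNum = blockNum + 1;        // block 0 = metadata, 1+ = frames
            continue;
        }
        if ( blockNum < 1 )
            continue;                       // metadata tags handled elsewhere

        // frame tags land in the arrays above, keyed by the frame index
        if ( key == "'image'" )
            ANM_FRAME_IMG[blockNum - 1] = anm_value( line );
        else if ( key == "'time'" )
            ANM_FRAME_TIM[blockNum - 1] = stof( anm_value( line ) );
        else if ( key == "'pos'" )
            ANM_FRAME_POS[blockNum - 1] = stov( anm_value( line ) );
        // ...and likewise for size, sound, text, alpha, color, text_color
    }
    fclose( fh );
    return TRUE;
}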

The outcome of all this is a simple but working system for achieving cutscenes and character transmissions in battleMETAL. There’s still some refinement to be had, mostly due to a lack of use and testing of the code, but as it stands I’ve at least verified that the code works.

Monday, January 21, 2019

battleMETAL - And Yet, Somehow it all works - part 2 - HUD

Heads up!?

Now that we’re sort of familiar with CSQC and what it’s about, we can take a look at the HUD for battleMETAL. There were 2 distinct phases to arriving at the HUD code that is now in the game. The first step was expanding the GUI functions I had created for the in-game menus that we saw in the last article. battleMETAL’s DNA is the western mech sims of the ’90s, and that genre of games loved its HUD mechanics.


a HUD from Earthsiege 2


It seems in hindsight that mastery of reading a mech’s HUD was integral to the overall gameplay experience of a mech sim, given how much information is being sent to the player. One of my opinions as to why mech games lost market share over time is their built-in complexity, which scares away newcomers, much like Starcraft II today.


My first attempt at a HUD system was to take the generic GUI functions I had made and craft a single HUD for each mech. The entry point for the HUD system was, and is, a single function call in CSQC_UpdateView(). I pass the player unit type to the client, and if that unit type is ‘mech’ then it runs the hud_frame() function. In the first system, I created HUDs as entity objects in CSQC, believing it to be the easiest way to hold data and functions for each HUD. You can kinda see the madness here on this github link to the battleMETAL project. Each HUD object implemented the same ‘soft’ interface of hud element functions, along with an initializer function that set up each object.
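To make that concrete, the ‘soft’ interface idea looked roughly like this - every name here is an illustrative stand-in, not an actual battleMETAL identifier:

.void() hud_init;                           // the "soft" interface: every HUD entity
.void() hud_drawFrame;                      // fills the same function-typed fields

void hudAlpha_init()      { /* precache images, store element positions */ }
void hudAlpha_drawFrame() { /* the drawpic() calls for this mech's HUD */ }

entity hud_spawnAlpha()                     // one of these existed per mech HUD
{
    local entity h;
    h = spawn();                            // HUDs lived as client-side entities
    h.hud_init = hudAlpha_init;
    h.hud_drawFrame = hudAlpha_drawFrame;
    return h;
}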


Now, in a more modern engine or code base, this isn’t exactly a bad idea. A proper class object for each HUD would be a fine way of rendering the HUD. Over in Quake C land, I was not so fortunate - there’s only 1 object close to being a class, the entity, and we all know by now they’re not really the same. This attempt ended up repeating a ton of boilerplate code, and overall was too unwieldy. Tacitly, I had made some out-of-scope assumptions about what the HUD should be able to accomplish as a system. It was good that I coded it in a direction towards a robust UI system; one should always code for universality. I realized later that the HUD didn’t need this universality, it didn’t need an open system for rendering layered UI graphics...it needed to be bespoke. Quake C’s limitations have a tendency to hone your design instincts toward one-off solutions for each module.


The next step in coding the HUD system was to disabuse myself of trying to make an object-based, layered HUD system. Rather, I decided to reorient the design to being built up from simple functions. I realized that each mech HUD doesn’t really have unique functionality that would ever differ much from another HUD. That is to say, mech HUDs all contain the same information; the only differences are slight variations in presentation and the on-screen position of the HUD elements.

It took about a weekend, but I refactored every single piece of HUD code. Rebuilding from the original pieces, I ended up with unique functions for specific pieces of the HUD. A few examples to explain what I’m getting at:

hud_renderEnergyMeter()
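For flavor, here is a simplified sketch of what one of these components might look like - the argument list and the drawfill() layout are illustrative rather than the shipped code:

void hud_renderEnergyMeter( vector pos, vector meterSize, float energy, float energyMax )
{
    local vector barSize;
    local float fill;

    fill = energy / energyMax;                            // 0.0 - 1.0 of the meter
    if ( fill < 0 ) fill = 0;
    if ( fill > 1 ) fill = 1;

    drawfill( pos, meterSize, '0.1 0.1 0.1', 0.6, 0 );    // dark backing plate
    barSize = meterSize;
    barSize_x = meterSize_x * fill;                       // scale the bar by remaining energy
    drawfill( pos, barSize, '0.2 0.9 1.0', 0.85, 0 );     // the meter itself
}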


Each only deals with rendering a single type of HUD component. The arguments for each also vary only by the information that each component needs for rendering. The responsibility for drawing the total HUD then shifts up to the main HUD function. This main function is named for the mech that it is supposed to go to, and I used a switch-case statement to determine which HUD is supposed to be drawn. When the player enters their mech, the server sends the mech’s id number to the client, and the switch-case statement selects the function by mech id.


Therefore, any given HUD main function becomes a short list of HUD component function calls; the only data that matters is the on-screen location of the HUD elements and the player data. This approach even allows a little bit of flexibility. To make more unique HUD elements, the code can either encapsulate the component in just the desired HUD or add it to the HUD function library, which would then allow any HUD to use it if desired. I applied this principle at least once with the renderWeapon functions.



hud_renderWeapon1


hud_renderWeapon2


Both functions take the same information but render it in slightly different ways. Each weapon is rendered atomically this way, which also allows the designer to use both styles in the same HUD. In keeping with the modular approach, this entire set of code is only called by a single entry function, renderHUDFrame(), which keeps coupling between the main functionality and the HUD system loose. This reduces headaches when adding new features to either system, or when changing large pieces of either one. I had a decent amount of fun bringing the HUD system to life for battleMETAL, and I think the code reflects it. Adding new HUDs is straightforward and maintainable, while troubleshooting existing HUDs won’t outright break too much else.
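Put together, the whole thing reads roughly like the sketch below - the mech id constant, the positions, and the component argument lists are placeholders, not the shipped values:

float player_mech_id;                       // placeholder: id sent by the server on mech entry
float MECH_ID_ALPHA = 1;                    // placeholder id
float player_energy;                        // placeholder player stats
float player_energy_max;

void hud_renderWeapon1( vector pos );       // placeholder argument lists
void hud_renderWeapon2( vector pos );

void hud_mechAlpha_frame()
{
    // a per-mech HUD is just a short list of component calls at chosen spots
    hud_renderEnergyMeter( gui_percentToPixelRawVec( '5 90 0' ),
                           gui_percentToPixelRawVec( '20 3 0' ),
                           player_energy, player_energy_max );
    hud_renderWeapon1( gui_percentToPixelRawVec( '80 85 0' ) );
    hud_renderWeapon2( gui_percentToPixelRawVec( '80 92 0' ) );
}

void renderHUDFrame()
{
    switch ( player_mech_id )               // selects the per-mech main function
    {
        case MECH_ID_ALPHA:
            hud_mechAlpha_frame();
            break;
        default:
            break;                          // unknown id: draw no HUD
    }
}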

Monday, January 14, 2019

battleMETAL - And Yet, Somehow it all works - part 1 - CSQC



CSQ-what?

It's hard to go into complete detail about this powerful module for Darkplaces (though I feel obligated to say this module is not exclusive to Darkplaces, and is available in other modern source ports of Quake such as FTE).

I’ve probably already mentioned this earlier in the series - Client-Side Quake C, that is. Recall that Quake C compiles to a file of bytecode called progs.dat. This file contains all of the custom game code that the Quake engine runs when the player starts up the game. Client-Side Quake C (hereafter referred to as CSQC) takes this concept and applies it to the client code during gameplay. What this means is that the programmer gets the full power of Quake C - spawning entities, file I/O, player state, player input and more - on the client side. Want to load custom UI data but don’t want to broadcast it over the network? CSQC. Custom sound effects played only for a specific client? CSQC. You get the idea.

One of the most powerful features CSQC has is the ability to draw images on the player’s Heads-Up Display, or HUD. Like most features of the source ports of Quake, this is a series of api hooks that rely on the programmer to leverage them effectively. The design ambiguity offers freedom, however. Using the basic image draw calls, and some input tracking code, I was able to implement a basic Graphical User Interface (GUI) that runs during gameplay. I was also able to make a wonderful mech-style HUD that shows the player most of the information they’ll need to know when piloting the big stompy robots. This post deals mostly with implementing the in-game GUI, which ironically is separate from the Main Menu system.

CSQC has the following methods that define the skeleton of the system -

    CSQC_Init()

    CSQC_InputEvent()

    CSQC_UpdateView()

    CSQC_Parse_StuffCmd()

There are a few more, but those are outside the scope of this post. A quick breakdown of these functions: _Init() is the first function called, specifically once the client connects to the server. It’s when the CSQC context begins; semantically, it’s a great place to put any code you want initialized as soon as possible. InputEvent() handles all player input during gameplay. It returns a boolean where TRUE means the input event was handled by CSQC and thus does not need to be passed on to the server. UpdateView() is the big one; this is where all render calls for custom GUI / HUD / anything else you want drawn get placed. You can also modify the view angles, location, and field of view for the client here as well. Lastly, StuffCmd() is used for handling text-based console commands that come from the server.
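In skeleton form they look something like this - the parameter lists vary a little between engine versions, so treat the signatures as approximate:

void CSQC_Init( float apilevel, string enginename, float engineversion )
{
    // earliest hook: precache images and sounds, set up globals here
}

float CSQC_InputEvent( float evtype, float scanx, float chary )
{
    // return TRUE if CSQC consumed the input (say, a menu is open),
    // FALSE to let it pass through to the server as normal
    return FALSE;
}

void CSQC_UpdateView( float vwidth, float vheight )
{
    clearscene();                           // start a fresh frame
    addentities( MASK_NORMAL | MASK_ENGINE );
    renderscene();                          // draw the 3D world
    // 2D overlays (HUD, in-game menus, animatics) get drawn after this point
}

void CSQC_Parse_StuffCmd( string msg )
{
    // console text "stuffed" at the client by the server lands here;
    // a common pattern is to just execute whatever isn't handled specially
    localcmd( msg );
}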

I built the in-game menus off a few basic building blocks: tracking player mouse input, the on-screen location of the mouse, a modest set of UI functions, and some variables to track the state of the menu. The trickiest implementation was the modest set of UI functions - the end result wasn’t so modest. I was able to get things like lists to work; you can see examples in the Mech Hangar and Arming menus. However, I never got ‘scrolling’ lists to work, mostly because I never felt the feature was needed in the first place. I was able to ensure that the UI scaled to the player’s display resolution...though my approach was hacky.


I chose a resolution, 1240 x 960, and created all the UIs to this scale, including their screen coordinates. Next, I converted each size or location value into a percentage: (x / 1240), (y / 960). Then these percentages are applied to the player’s actual screen resolution. Therefore, a UI element that is 75% of the total screen width will always be 75% of the total screen width regardless of what that screen width actually is. It’s brute-force, but it worked solidly enough to base a menu system on.
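The conversion itself is only a couple of multiplies. Something along these lines - the function name is just for this post, and using the vid_conwidth / vid_conheight cvars as the 2D canvas size is an assumption:

vector gui_scaleFromDesign( vector designPos )      // designPos authored at 1240 x 960
{
    local vector out;
    out_x = ( designPos_x / 1240 ) * cvar( "vid_conwidth" );    // fraction of width...
    out_y = ( designPos_y / 960 ) * cvar( "vid_conheight" );    // ...applied to the real canvas
    out_z = 0;
    return out;
}

An element authored at x = 620 always lands at 50% of whatever width the player is actually running.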

The core of any given menu is a set of functions that in a real language would be an ‘interface’ - a contract of functions that each menu object would have to implement. Each menu has _DrawFrame() and _Listener() functions. The menu system uses a switch() control and a global variable MENU_CHOICE to determine which _DrawFrame and _Listener function to call. Let’s use the hangar as an example; we see mechHangarDrawFrame() and mechHangarListener(). Inside each of these functions, we put all the UI elements we’d like to draw for that menu; mechHangarDrawFrame() ends up looking like

     mechHangarDrawFrame(){
         local vector topleftroot;
         topleftroot = VIEW_ORG;

         drawpic(topleftroot, UI_DEF_BOX_512, VIEW_MAX, CLR_DEF_UI_HANGAR, 1, 0);

         menu_hangar_MechDisplay(topleftroot + gui_percentToPixelRawVec('0 24'));
         menu_hangar_MechList(topleftroot + gui_percentToPixelRawVec('0 24'));
         menu_hangar_MechInfo(topleftroot);
         menu_hangar_MechFluff(topleftroot);
         menu_hangar_MechHPoints(topleftroot);
     }


drawpic() is a CSQC builtin for drawing 2D images onto the player’s screen. The other functions are mine, and each renders a different part of the Mech Hangar GUI so that the end result looks like:

 

The second part is reading player input when the menu is active. For each menu there’s a distinct function, and in our example it’s mechHangarListener(). This function contains all the input-related behavior that the code will call when the player inputs something.

     mechHangarListener(){
         mechSelectListener();
     }

Here mechSelectListener() ‘listens’ for player input on a specific UI element. Quake C is fairly rudimentary, so the means for capturing player input in the context of a GUI is crude. For starters, the code I wrote breaks the screen up into rectangles. The specific listener function provides the coordinates for its rectangle. When the player clicks the mouse, the listener code then checks whether the mouse’s screen location is inside the rectangle it defined. If the mouse is inside the rectangle, it returns TRUE, and again it’s up to the listener function to handle this result. For lists of objects, such as the list of playable mechs, the ListListener function actually returns the index number of the item on the list. So if the player chose the second Light Mech, then the listener return value would be 2.
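The hit test underneath all those listeners is just a rectangle check against the tracked cursor position - MOUSE_POS here stands in for however the code actually stores it:

vector MOUSE_POS;                           // stand-in: updated from CSQC_InputEvent()

float gui_mouseInRect( vector topLeft, vector rectSize )
{
    if ( MOUSE_POS_x < topLeft_x )              return FALSE;
    if ( MOUSE_POS_y < topLeft_y )              return FALSE;
    if ( MOUSE_POS_x > topLeft_x + rectSize_x ) return FALSE;
    if ( MOUSE_POS_y > topLeft_y + rectSize_y ) return FALSE;
    return TRUE;                            // cursor is inside the rectangle
}

A list listener can then run this check once per row, stepping the rectangle down each time, which is presumably where the returned index number in the example above comes from.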

There are some drawbacks to such a simple menu setup; the biggest is that items cannot be layered on top of each other. Each menu is only ever 1 ‘layer’ of UI elements deep. Another is scrolling lists; I’m sure that with some extra effort, getting scrolling lists to work wouldn’t be too bad. Finally, there is no way to drag-and-drop items (because there aren’t any layers of menu). If I had more time for polish, I would have added some drag-and-drop functionality, especially for the Arming menu.



Monday, January 7, 2019

battleMETAL - And Yet, Somehow it all works!

And Yet it moves…

       So the last set of blog posts laid out some of the daunting challenges that I faced over the course of the project. Darkplaces / Quake, when the project started in mid-2016, was fairly old, but only when working through battleMETAL did I realize how old.


      However, all is not doom and gloom! In the next sections I will talk about the cool success stories. We will get into neat things like the various iterations of the player’s Heads-Up Display, or the game save system. Overall I’d say that in many ways, Quake gives you enough rope to trip yourself with...over a cliff...into the ravine. It’s a shoddy metaphor but it is sort of true. Compared to modern game design sensibilities, Quake is clearly lacking, but because it starts out so empty you end up with an odd freedom of action to implement your own stuff.

       I’m not going to say it was entirely smooth sailing, but I’d like to believe that because of the inherent freedom of the code base, I was able to get a lot of mileage out of such an old engine. I would also like to credit the folks over at insideqc. They have created a great repository of go-to examples and an extensive forum post history that answers most of the straightforward questions you might have about Quake C. If I ran into a situation where I couldn’t find an answer to my question online, I’d go to the fallback - reading the Darkplaces source code over on GitHub.


So onto the showcase of features that I bashed into an engine whose core is 22, no, 23 years old! (it’s 2019 now...d’oh)