Tuesday, December 6, 2016

OpenGL 3.2 migration

It was a fight but it's finally over: I've converted the game's graphics engine to OpenGL 3.2 (or at least removed functionality deprecated by that version). It was a fight because it takes a lot more pieces in the right place to draw anything at all, but it was a fight worth fighting. Now I have the power of shaders at my disposal and I intend to use it.

Back in the day when I started working with OpenGL, the NeHe tutorials were the main learning material. They made things look simple and following them was a fairly quick way to get results. A good part of that was due to the simplicity of OpenGL's immediate mode and fixed pipeline. There was minimal setup code and a very direct way to specify what to draw. The setup consisted of tying an OpenGL context to a window (or whatever your OS of choice uses for displaying graphics, a task later covered by a myriad of 3rd party libraries), turning desired options on and setting the projection matrix (kind of like a camera, it defines which part of the game world is visible and how distance is perceived). When it came to drawing, you'd simply specify what kind of polygons you were going to draw and then feed the GPU with vertex positions. Optionally you'd specify which texture to use (or no texture) and which color to apply.
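For illustration, a whole fixed pipeline frame could look roughly like this (a minimal sketch, not taken from any tutorial or from Stareater, and assuming the context setup is already done and legacy pre-3.0 functions are available):

```c
#include <GL/gl.h>

/* Old-style immediate mode drawing: everything is specified on the spot. */
void drawFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* "Camera" setup using the built-in matrix stack. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0, -10.0, 10.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Feed vertices one by one, every single frame. */
    glColor3f(1.0f, 0.5f, 0.0f);
    glBegin(GL_TRIANGLES);
    glVertex2f(-1.0f, -1.0f);
    glVertex2f( 1.0f, -1.0f);
    glVertex2f( 0.0f,  1.0f);
    glEnd();
}
```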

That's all good when you're dealing with simple scenes and don't need that much performance. Feeding vertex data every frame (immediate mode) is inefficient because the GPU can't do much but wait for the CPU to communicate all the data. The alternative approach is retained mode, where a list of vertex data, much like a texture image, is uploaded once and later just referred to. In this mode draw calls boil down to just telling the GPU which vertex list ID to crunch, and the GPU can get to work immediately. Additionally, retained mode goes hand in hand with shaders, which are the opposite of the fixed pipeline. The problem with the fixed pipeline is that it's a "one size fits all" kind of solution where unneeded features can at best be turned off. Shaders, on the other hand, can be tailored much closer to the application's needs and can provide features beyond those supported by the fixed pipeline. For those and probably other valid reasons, immediate mode and the fixed pipeline were declared deprecated in OpenGL 3.0 and dropped from the core profile in later versions.
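Once the vertex data and shaders live on the GPU, a retained mode draw call really is only a few lines. A sketch, assuming an extension loader such as GLEW and that the program and vertex array object were created beforehand:

```c
#include <GL/glew.h>  /* assumes an extension loader like GLEW provides the 3.2 entry points */

/* Draw a mesh that was uploaded earlier; only IDs are passed around, no vertex data. */
void drawMesh(GLuint program, GLuint vertexArrayId, GLsizei vertexCount)
{
    glUseProgram(program);
    glBindVertexArray(vertexArrayId);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```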

But in order to get any graphical result from shaders and retained mode, there is a lot of stuff to set up (rough sketches of each step follow the list):
  • Load shader source code and compile it. One would think OpenGL doesn't deal with raw source code, but that is the main way to get a shader onto the GPU. Of course the shader source may not be valid and compilation can fail, so the application has to check whether this step was successful.
  • Attach and link shaders into a single program. In the previous step you just compiled various shader types (geometry, vertex, fragment), but to make them useful you have to tie them into a single unit called simply a "program". This can fail too (for example if the output of one stage doesn't match the input of the next), so the application has to check this as well.
  • Collect location IDs of shader attributes and uniform variables.
  • Make vertex data buffers.
  • Make vertex "array objects". They hold information about how to feed vertex buffers to shaders, that is, which bytes go to which attribute. This step won't fail outright but it provides a lot of room for error.
  • Turn on options like in the fixed pipeline (depth test, alpha blending, face culling and so on).
  • There is a projection matrix calculation step too, like in the fixed pipeline API, but matrix management is very different. You can't use OpenGL's functions for building and switching matrices; you have to do it on your own (or use a 3rd party library).
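To give a feel for how much ceremony is involved, here are rough sketches of the steps above. They are not Stareater's actual code; they assume an OpenGL 3.2 context already exists and an extension loader such as GLEW pulls in the function pointers. First, loading and compiling a shader with the mandatory status check:

```c
#include <GL/glew.h>
#include <stdio.h>

/* Compile a single shader stage from source; returns 0 on failure. */
GLuint compileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    /* OpenGL won't complain on its own, the status has to be queried explicitly. */
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
    {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "Shader compilation failed: %s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
```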
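Linking the compiled stages into a program follows the same pattern, including the explicit check:

```c
#include <GL/glew.h>
#include <stdio.h>

/* Tie compiled vertex and fragment shaders into one program; returns 0 on failure. */
GLuint linkProgram(GLuint vertexShader, GLuint fragmentShader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glLinkProgram(program);

    GLint status = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status != GL_TRUE)
    {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        fprintf(stderr, "Program linking failed: %s\n", log);
        glDeleteProgram(program);
        return 0;
    }
    return program;
}
```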
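Attribute and uniform locations are then collected by the names used in the shader source (the names "position" and "projection" are made up for the example):

```c
#include <GL/glew.h>

/* Locations are just integers that later identify inputs in draw and uniform calls. */
GLint positionAttribute;
GLint projectionUniform;

void collectLocations(GLuint program)
{
    positionAttribute = glGetAttribLocation(program, "position");
    projectionUniform = glGetUniformLocation(program, "projection");
    /* -1 means the shader has no such active variable, which is easy to hit
       if the compiler optimized an unused one away. */
}
```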
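Vertex buffers and array objects could be set up along these lines; the layout (two floats per vertex, position only) is an assumption for the example:

```c
#include <GL/glew.h>

/* Upload vertex data once and describe its layout through a vertex array object.
   Returns the array object ID; the buffer stays referenced by it. */
GLuint makeMesh(const float* vertices, GLsizei vertexCount, GLint positionAttribute)
{
    GLuint buffer, vertexArray;

    /* The buffer holds the raw bytes on the GPU. */
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(float), vertices, GL_STATIC_DRAW);

    /* The array object remembers which bytes go to which attribute. */
    glGenVertexArrays(1, &vertexArray);
    glBindVertexArray(vertexArray);
    glEnableVertexAttribArray((GLuint)positionAttribute);
    glVertexAttribPointer((GLuint)positionAttribute, 2, GL_FLOAT, GL_FALSE,
                          2 * sizeof(float), (void*)0);

    glBindVertexArray(0);
    return vertexArray;
}
```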
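Turning on the remaining options is the one step that still looks like the fixed pipeline:

```c
#include <GL/glew.h>

void setupOptions(void)
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
```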
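Finally, with glOrtho and friends gone, the projection matrix has to be built by hand (or with a 3rd party library) and handed to the shader as a uniform. A minimal orthographic matrix, assuming the program from the earlier sketches is currently in use:

```c
#include <GL/glew.h>

/* Column-major 4x4 orthographic projection, replacing what glOrtho used to do.
   Assumes glUseProgram() was already called for the program owning the uniform. */
void setOrthographicProjection(GLint projectionUniform,
                               float left, float right, float bottom, float top,
                               float zNear, float zFar)
{
    float matrix[16] = {
        2.0f / (right - left), 0.0f, 0.0f, 0.0f,
        0.0f, 2.0f / (top - bottom), 0.0f, 0.0f,
        0.0f, 0.0f, -2.0f / (zFar - zNear), 0.0f,
        -(right + left) / (right - left),
        -(top + bottom) / (top - bottom),
        -(zFar + zNear) / (zFar - zNear),
        1.0f
    };
    glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix);
}
```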

As I said, a lot more steps on top of the previous ones, and if any of them fails or gets misconfigured, there will be no picture. It took me a good part of the week to get it working in Stareater, but when it started drawing it was like Christmas. Pieces started falling into place one by one and I was able to experiment with some wilder ideas, like drawing straight lines with circle arcs without involving infinity. For now I have mostly converted the existing graphics from the old system to the new one. The exception is planet orbits in the system view: they are now a few smartly shaded polygons instead of a ton of small quadrangles pretending to be a piece of circle arc. On top of that I've left room for improvements, like applying a texture to planet orbits to get a gradient effect and doing the same for the hexagonal grid in space combat. Also, I can save texture space with a shader which recognizes which parts of a ship should be painted with the player color from a single "sprite" image, instead of requiring an additional "mask" image.
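To sketch that last idea (this is just one possible encoding, not necessarily how it will end up in Stareater): the sprite could mark "paint me with player color" regions with a key color such as pure magenta, and a fragment shader could detect and replace it. A hypothetical GLSL 1.50 shader, stored as a C string so it can be fed to the compileShader() sketch above; the names and the magenta convention are made up for the example:

```c
/* Hypothetical fragment shader: magenta-ish texels get replaced by the player's color. */
static const char* playerColorFragmentSource =
    "#version 150\n"
    "uniform sampler2D sprite;\n"
    "uniform vec4 playerColor;\n"
    "in vec2 textureCoordinates;\n"
    "out vec4 outputColor;\n"
    "void main()\n"
    "{\n"
    "    vec4 texel = texture(sprite, textureCoordinates);\n"
    "    // Strongly magenta pixels (high red and blue, low green) act as the mask.\n"
    "    float marker = step(0.5, texel.r * texel.b * (1.0 - texel.g));\n"
    "    outputColor = vec4(mix(texel.rgb, playerColor.rgb, marker), texel.a);\n"
    "}\n";
```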

It's exciting and all, but before I bring all those goodies I'll take a pause from engine making in order to do something cool in the game making department. After that I'll pack a release and then refactor the graphics engine a bit to make scene building easier. And then the visible improvements will ensue.