One of the simplest techniques for maintaining large code bases is to enforce a strict hierarchy of libraries.
I first remember reading about it well over a decade ago in “Large Scale C++ Software Design” by John Lakos. It has grown in my own usage to the point where my code is now organised in a literal levelled folder system.
The point is that a library should only use functions lower down than itself, so there is a clear path from higher-level functionality down to code with fewer dependencies. It also avoids cycles where X depends on Y which depends on X: that can’t happen, because Y sits below X and so can’t depend on it.
So my libs structure is divided into levels, except for my level zero libraries, which live in the root of the libs folder. Then level1 contains the next tier up, and so on: a library in level 1 can only access core and binify, while a level 2 library has access to binny, input and math as well as core and binify.
I use CMake’s parent scope feature to inject a level’s set of libraries, without the levelX part, into the include paths further down the chain.
This results in my app directory (where the actual program source code lives) being able to access any of the libraries via library/foo.h even though it might physically be located at level2/library/foo.h. The same is true for the libraries themselves, though obviously they can only access libraries at a lower level. This makes it easy to adjust a library’s level as needed: it’s simply a physical move and a change of two lines in CMakeLists.txt files (I’m tempted to automate that last step so it automatically picks up movements of libraries).
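As a sketch of how that injection might look in CMake (the directory layout matches the post, but the variable name LIB_INCLUDE_DIRS and file placement here are illustrative, not my actual build files):

```cmake
# Hypothetical libs/CMakeLists.txt: collect each level's directory so that
# "library/foo.h" resolves without the levelN prefix further down the chain.
set(LIB_INCLUDE_DIRS
    ${CMAKE_CURRENT_SOURCE_DIR}          # level 0 libraries live in the root
    ${CMAKE_CURRENT_SOURCE_DIR}/level1
    ${CMAKE_CURRENT_SOURCE_DIR}/level2
    PARENT_SCOPE)                        # expose the list to the parent scope

# Hypothetical app/CMakeLists.txt: consume the injected paths, so source
# files can write #include "library/foo.h" regardless of the library's level.
include_directories(${LIB_INCLUDE_DIRS})
```

Moving a library between levels then only touches the directory itself and the list above, which is what makes the two-line change mentioned earlier possible.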
I have a strict policy of including files via the #include “library/include.h” syntax, even when the library part isn’t strictly necessary. This makes the dependency more visible, and it means I can treat the include path as a form of namespacing. I don’t feel bad about having identically named files in separate folders, as it’s obvious which one is being used thanks to the library prefix.
So for example my low-level render system is level 3 and lives at level3/render. My Vulkan implementation of that system lives in level 4 at level4/vulkan. My mid-level render system lives at level4/midrender, as it derives from render; the fact that behind the scenes it potentially uses level4/vulkan doesn’t matter. It depends only on level 3 functionality, so it’s a level 4 library. My namespaces follow the include paths fairly closely (with some differences due to case choices): they are Render, Vulkan and MidRender (I chose camel case to separate them from the standard library’s all-lowercase namespaces). There are texture.h files in both render and vulkan, but they are always referred to as “render/texture.h” or “vulkan/texture.h”, so it’s easy to tell which was intended.
It’s not much work to implement and really helps as things grow: it’s usually fairly easy to tell where new functionality should go, and it stops any accidental breaking of the architecture.
Push versus Pull Renderers
The biggest choice a renderer has to make is whether it’s a push or a pull type; however, in practice most choose push and so never really think about the choice.
In these discussions we split the hardware into two types: one which runs the world simulation (physics, input, game code, etc.), which we call the CPU, and another which performs the computation related to rendering, the GPU. In some hardware topologies it’s possible to use a physical CPU core as a GPU slave; such a core is considered part of the GPU in this text.
A push renderer is almost certainly what you’ve seen in 99% of places. It’s a system where the world database and simulation are held on the CPU, and the CPU selects what to render and pushes the data required to render that selection to the GPU. It’s known as a ‘push’ renderer because the CPU pushes data to the GPU.
A pull renderer is a very different beast. Here the renderer looks directly into the world database and uses that to render things for the user. In a full pull renderer the CPU would do nothing for the scene to be rendered: the GPU decides what and how to render without any CPU involvement, hence the GPU ‘pulls’ render data directly.
Pull renderers are rare due to many issues, from graphics APIs having CPU-only bindings to the CPU being the only device with the flexibility to render something. Render APIs are almost always push-biased: you have an object on which you call state-change and draw calls directly. Pull requires a higher-level API than is currently in favour; it has to know about your chosen bounding volumes, and potentially understand your material and geometry structures. It also requires the choices about what to render to be made on the GPU side.
At first glance this sounds like a classic render-thread architecture, where a thread receives commands from the CPU and turns them into renders. However, that is still a push renderer, as the CPU still has to push some data to the render thread. A pull renderer might use a render thread, but if so it goes and fetches what to render rather than just following a command list.
First I’ll describe a PS3 pull renderer I’ve written in the past (it should be safe from NDAs), and then some ideas for a future PC pull renderer.
The key to a PS3 pull renderer is slaving at least one SPU to the GPU, effectively using an SPU as a fancy front end for the GPU. Apart from writing the SPU pull code, getting the CPU synchronisation right is the first thing needed. A key point of any pull renderer is ensuring there are no data races when accessing the shared world database; the simplest method is a copy inside a critical section, so the CPU is only blocked during the lock period whilst the world database is duplicated.
Once per frame, the CPU issues a ‘kickoff’ signal telling the GPU to start pulling (in theory it’s possible to have a free-running pull renderer, but in practice it helps to have a kickoff signal from the CPU side).
When signalled, the GPU slave starts reading the world database and culls objects using their bounding volumes and world-space matrices. If an object is visible, it decodes the render mesh data structure and writes a portion of a command list to render that object. As the GPU front end, the entire render pipeline lives here, and the CPU gets back to its simulation: its entire render overhead is just the time for the signal and a lock.
I’m in the last stages of closing up my life in the UK and moving to Stockholm, Sweden to join EA SEED. It’s not the first time I’ve lived abroad and, to be honest, I’ve learnt a lot from it, so I feel more confident about the move.
The job itself is very exciting: SEED is a research lab looking into the future of games, which is pretty much the description of my ideal job! What I’ll actually be doing I don’t know yet, but with such an awesome remit, finding things to do won’t be an issue. More likely there will be too many ideas and things to investigate rather than nothing I want to do.
The team Johann has pulled together is also a pretty awesome dream team; it’s full of real industry heavy hitters and many names I know. The saying “work with people smarter than you” will be very true, I suspect! I’ll also be working with Mopp (Paul Greveson) again; we worked together on Brink a few years ago at Splash Damage. Mopp is one of those (rare!) talented art/tech combo people, which makes him an amazing person to work with.
I’ve got out of the habit of updating my blog… It’s been a crazy few years; the last post I made was last year!
I've also started to use firstname.lastname@example.org as my permanent personal email.
It will be interesting to see how thegrid.io works… the idea is that an AI designer will smarten the website up.