Death to monolithic game engines and editors!
The chances are if you work on a game today, you’ve used a monolithic game engine (Unreal and Unity are obvious examples). It has been the accepted design basically since video game engines and editors have existed.
In my years I’ve made and shipped at least 5 of them and used several more, and at one point in time it made sense; indeed, there wasn’t really any choice. But moving forward I personally think we have reached the end of life of the design.
Due to an accidental historical quirk, Heavenly Sword had one part that wasn’t monolithic. Heavenly Sword’s game/engine was completely separate from its editors; it basically had two editors, one like a traditional game editor and another for real-time editing (which was developed first). Towards the end the two editors had some overlap, but to a large extent there were three separate code bases for much of development.
Literally no coupling except at the data level. It meant that people working on the editor and tools didn’t have to worry too much about whether they would break the game, and vice versa. It had some negatives, but the clear separation meant it was easier to work with on a day-to-day basis. Sure, there were times it was a pain in the arse, but overall, with hindsight, it was a good idea and I’d kill for that kind of code base today.
What’s the problem?
The bigger something is, the harder it is to understand and work with. The Unreal and Unity code bases are huge, millions of lines of code, and so have developed their own standards, ways of working and ecosystems.
Which doesn’t sound like a bad thing? And to a certain level it isn’t, but as with most things, “everything in moderation”. At this point they are so big that you end up with an engine and associated editor so complex that no-one actually knows how it all works, even internally, and game developers are often forced into using complex things that don’t do what they want. Deep change is almost impossible, as the fragile coupling runs across all systems.
This leads to the dreaded “Game Editor”, something so complex that everybody who uses it, in whatever role, has to deal with a lot of extra stuff. The “Game Editor” is a classic case of trying to be a tool for everyone in games development, yet it doesn’t meet the requirements of any one role. Everything is a compromise.
Why do we expect programmers, level designers, animation artists and everyone else to have to use the same tool?
The answer goes back to the original monolithic engines that the editors were created to support. In the desire to make a single ‘engine’, we also went for a single ‘editor’, as it was always an all-or-nothing choice.
The worst example of this is build systems, where the separation between input and runtime isn’t well defined. The build system has almost nothing to do with the editor or engine, but it usually comes bundled and in some cases is part of the editor.
Why is this?
The underlying reason has been a lack of modular code bases. C, C++ and C# were designed before the era of powerful built-in code repository systems, so it often made sense to stick everything together in one ‘uber’ project and have engine and editor as one thing.
This was also driven by an incorrect assumption about fast iteration: instead of what people wanted, fast iteration of what I’m working on, it led to designs where we attempted fast iteration of the whole thing. That was fine(ish) when projects were smaller, but as size has grown it’s become increasingly nuts to have (for example) programmers with hundreds of gigabytes of artist-format textures when all they really care about are the runtime-format textures for the bit they are working on right now.
Additionally, there was a general software ethos that reusing code was always a win, and that not doing so was a bad code smell. So developers were urged (even forced, via coding standards and reviews) to seek out and use existing code within the monolithic code base.
Coupling in this type of system is too easy, and so even parts intended to be modular often end up coupled over time, as the boundary between part A and part B is soft and malleable.
There is also a tendency to assume separation is somehow related to OS/language features such as shared libraries, but that’s completely separate from the issue. It doesn’t matter whether I’m working on a static library or a shared one, as long as I don’t need to browse through a huge code base.
Many monolithic code bases have tried to be modular and have at least some visibility-containment features. You will probably have a core/foundation portion, an editor portion, etc., but they usually lack a clear separation between what you do and don’t have to look at within them. Renderers, bizarrely, are usually the exception, due to the historical quirk that there had to be separation to support different platform render APIs. However, even a renderer interface often brings in lots of things it doesn’t really need, due to coupling and unclear boundaries.
If we started today, we wouldn’t have to worry so much about one huge repository; there are standard ways of separating modules so each sub-project has only the ones it needs, with its dependencies pulled from source control as needed.
Also, rather than always reusing, there is a greater acceptance that reuse has pros and cons. Yes, it’s a good idea, but sometimes it’s best not to if it brings in unwanted coupling and requires yet more library ‘surfaces’ to worry about.
There are a few truly global things in all parts of a game engine and editor; memory management, logging and OS/platform differences are the obvious ones. But apart from those, a library or tool that converts an artist texture into a runtime format probably doesn’t need much shared code with anything else. An editor might want to link to it so it can provide a GUI for the conversion tool, and a build system so it can schedule thousands of conversions, but it’s largely a separate thing from the rest of the engine and likely something the runtime doesn’t need to have.
Curiously, this is exactly the design Unix was based on 50 years ago; back then it was driven by low memory, and having small, separate apps was a simple way of keeping memory requirements down.
This also directly corresponds to Lakos’s library layers: code should only rely on things below it in a library hierarchy, with hard boundaries between libraries. I shouldn’t need to have a huge project in my IDE/editor if I only rely on half a dozen libraries.
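In build-system terms, that layering might look something like the following CMake fragment. The library names and layout here are hypothetical; what matters is that the dependency graph is an explicit, acyclic list you have to edit by hand.

```cmake
# Hypothetical Lakos-style levelization: each library states its
# dependencies explicitly, and may only depend on layers below it.
add_library(core     core/memory.cpp core/log.cpp)   # layer 0: truly global stuff
add_library(geometry geometry/mesh.cpp)              # layer 1
add_library(ai       ai/pathfind.cpp)                # layer 2

target_link_libraries(geometry PRIVATE core)
target_link_libraries(ai       PRIVATE core geometry)
```

Pulling another part of the code base into `ai` means typing its name into this file, which is exactly the small, deliberate act of friction argued for below.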
There might be a small cost to separation, but in most cases it’s probably worth it. Some things might become a function call rather than being inlined, and templates across library boundaries are probably not going to be allowed, so things may be a little slower. But it’s probably a cost worth having, and where it’s not, you make an exception and couple a bit.
Coupling is a hard problem, and without discipline it breaks down. The easy way to get that discipline is to literally have as little as possible close at hand to couple with. If you want to use something from another part of the code base, it should actually require a physical developer action. That might be as simple as typing the library name into a make file, but it still adds a little friction.
This brings up a controversial issue: is easy always the right choice?
I’m of the opinion that things that cost in some way should have a little friction, to force some thought from the developer. The lowest-cost option should be the default, and the developer should have to think, even for a few seconds, about whether it’s worth paying the cost. This also applies outside programming: making it hard to do bad things is a good ethos to have generally throughout game development.
So, for example, bringing in a new portion of code adds cognitive friction for everybody going forward, so actually force a little friction on the person doing it. I’m not talking about a big pain; as mentioned, perhaps just typing a library name into another file, but it causes a pause which hopefully gives the developer time to think about it.
Today’s game code bases are large and complex enough that the single most important concern is how much of the code base needs to be in your head while working on it. The reality is that the less you need to think about, the more cognitive power is spent on what you are working on, rather than on trying to remember whether feature X already exists in N million lines of code.
This seems to go against accepted wisdom.
- Reuse as much as possible
- Code already tested is best
- Not Invented Here syndrome is bad.
But the reality is that for large code bases, there is a complexity level beyond which it’s simply not worth finding existing code. That’s not to say you shouldn’t use complex existing code, but if you do, it needs to be worth the increased complexity entering your cognitive model of the task you are working on.
Also, it might be worth refactoring to use a shared piece of tested code after yours is basically working, rather than having to keep it in mind during the main development (this of course doesn’t help future maintenance).
Is there a good rule of thumb?
I don’t know; I’d be interested in any research on the topic. If we use the classic seven plus-or-minus two rule, it would seem to suggest we’d be best off with about eight library ‘surfaces’ to worry about, so I’d suggest that ideally you’d want about eight interfaces to each chunk of code being worked on.
Which, if you have a monolithic three-million-line project, is going to be hard. Which leads (finally) to the title of this post.
Death to monolithic game engines and editors!
What would that look like?
The more we separate things, the easier it will be to move forward to the larger and even more complex game engines and editors that are coming. I personally see this as separate editors specialised to certain jobs.
I think having the game engine in the editor is a bad idea; keep them separate and just send data back and forth (stream the game into the editor for editor play mode, for example).
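“Only send data back and forth” can be as simple as a small wire protocol between the editor process and the running game. This is a minimal sketch under assumed details: the command type, its fields and the `'MOVE'` tag are all invented for illustration, and a real system would send the bytes over a socket rather than a local buffer.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical editor->engine message: "move entity N to this position".
// A fixed-layout, trivially copyable struct so it can be shipped as raw bytes.
struct MoveEntityCmd {
    uint32_t magic;     // protocol tag, 0x4D4F5645 == 'MOVE'
    uint32_t entity_id; // which entity the editor is dragging
    float    pos[3];    // new world position
};

// Serialize the command into a byte buffer (stand-in for a socket write).
size_t encode(const MoveEntityCmd& cmd, uint8_t* buf) {
    std::memcpy(buf, &cmd, sizeof cmd);
    return sizeof cmd;
}

// Deserialize and validate on the engine side (stand-in for a socket read).
bool decode(const uint8_t* buf, size_t len, MoveEntityCmd* out) {
    if (len < sizeof *out) return false;
    std::memcpy(out, buf, sizeof *out);
    return out->magic == 0x4D4F5645u;
}
```

The editor never links the engine and the engine never links the editor; the struct layout is the entire contract between them, which is exactly the data-level coupling Heavenly Sword had.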
Having an animator use a UI that was also designed for gameplay just isn’t nice or friendly.
If I’m writing a new AI feature, should it require that I have a code project with everything in it? I want my search to cover only the few hundred thousand lines of code that make up the AI and minimal game features.
And when someone is working on a game feature, they shouldn’t by default need to see beyond the AI API. Sometimes it might help, so I press a button and look in a separate project, but most of the time I want to think about what the AI does, not how it does it.
And this is all achievable today with static libraries and modern build systems for C/C++. If you’re willing to go to languages like Rust or Swift, you have entire ecosystems designed with micro-libraries in mind.
It seems more likely that at first one of the bigger-funded non-public engines will be created from scratch in this way, whereas the hard part is retrofitting such a design to the big existing engines and editors. I think it can be done, but it would take serious time and management buy-in for something like Unreal to be reorganised into such a modular architecture (it would be a very interesting job to architect and lead such a team…). The technology and architecture could do it now, but actually doing it to a large code base is a very hard problem. The nearest thing I can see, complexity-wise, has been Microsoft keeping Windows updating fairly smoothly from a 16-bit DOS extender to a modern 64-bit OS with all the bells and whistles that didn’t even exist when the first line of Windows code was written (I wonder if anything exists from the original v1, perhaps the definitions of things like WPARAM?).