A few days ago a colleague asked me when I consider that new code or logic should go behind a feature flag. I gave a short answer, but the truth is that while using feature flags (or feature toggles) is very common these days, it is taken for granted that you know how to use them, and it is not always explained when to use them.
What follows is my humble opinion and experience, feel free to ignore it.
The single most important fact, usually forgotten and perverted, is that feature flags are meant to be temporary: they should always be removed once fully rolled out (or replaced by a system switch, more on this later).
Feature Flags are to be used by developers and product, while System Switches are for Systems, DevOps, SREs and the like.
When coding the feature flag check and the forking logic, there are two accepted paths. The first one allows you to simply delete the lines in the future without any further changes:
```python
if not feature_flags.enabled(feature_flags.constants.NEW_SHINY_FEATURE):
    # old path (don't forget to return, to avoid also executing the new logic!)
    ...
# new logic
```
And the more commonly found one, which just needs code re-indentation when removing the flag:
```python
if feature_flags.enabled(feature_flags.constants.NEW_SHINY_FEATURE):
    # new logic
else:
    # old logic
```
If you want to know more about what Feature Flags are, this article is quite detailed.
Anyway, what I really wanted was to add to that really interesting article some related links around the topic, as in the past at TheMotion our idea was to move towards Self-Contained Systems, and at Ticketea the new checkout frontend was a "by the book" example of a micro-frontend (including
iframe-based injection on the main website).
If you're interested, check the following resources:
Author: Dale Carnegie
This is a book I've heard about many times, and more than one person has recommended it to me in the past, so for a change from dev podcasts and the like while commuting, I got the audio-book and just finished listening to it.
It is a curious, mixed-feelings book. On one side, it has a lot of good advice on varied topics: how to deal with friendships and love, business, delivering hard news, encouraging good behaviour from people who, for example, might be performing lower than they should... Being a revised edition, I don't know if the original one had the concept of "organizational resilience", but at least some examples include the current impact of social media, Twitter, Facebook and the like, so while the original dates from 1936, you won't notice it.
But on the other side, and in my personal opinion only, the book goes so deep into being good with others and assuming good will and good intentions that my personal experience rejected some points as being "way too positive". Not everything can be solved with a smile (or even deserves one), and while smiling and being considerate with others will surely help, for example in negotiations, if you don't really feel it, it can backfire and make you appear fake and deceptive (speaking from experience again: in some cases I've seen, the effect has been worse).
Sure, it has lots of examples of presidents and miscellaneous (often famous) people doing amazing human acts, but I feel the book has a huge aura of "just be good with everyone" instead of covering more specific, suboptimal scenarios and how to properly deal with them. I wasn't expecting something as full of examples and as up to date as, for example, Managing Humans, but neither something so mystic and overly optimistic.
Still, it is interesting to read and contains some good advice. Just not as amazing as I expected.
I wish I could have taken more notes, but with an audio-book it was hard to note anything beyond very concise points. Anyway, there are nice summaries online, like the one at Wikipedia.
I will start this blog post with two examples of non-code "hard to test" scenarios (about sight) and their solutions:
Instead of thinking "oh, I cannot see xxxxx, so I can't do it", the first scenario uses a different approach (a different sense), while the second one tries to transform the situation into one that fits our need (making the invisible man visible so we can use our sight to find him).
Now, applying this to software: in many cases things are hard to test due to code complexity, and I don't want to delve into that as there are many articles and techniques on how to solve it, but sometimes there are scenarios that really seem hard to test, or too fragile if tested traditionally.
This scenario poses a bit of a challenge because, once rendered, the canvas is just a container of pixels (or a byte array), so if we have buttons and other interactive items, how can we test them?
Well, three types of tests come to mind:
First of all, if you control generating the elements, you can build a lot of tests around element generation and composition of those elements, without actually needing to care about the rendering/UI.
As I prefer not to repeat things already explained far better elsewhere, there is an Uncle Bob blog post about applying TDD to terrain generation which exemplifies how to test something as initially daunting as a terrain generation algorithm, and the "trick" used is to focus not on the data generated but on the generators and actions. Terrain generation is deterministic because, oversimplifying, the only things that change are the input parameters and the seed, so if you test those well enough, you can replicate any terrain inside a test (although the test could get big!).
As a real-world example, at a previous job I had to build a server-side avatar generation platform and wanted to add tests to it. By making the generation code a set of deterministic actions, it was really easy to test that, given actions A, B and C, the result was a "female avatar wearing a cape with a Chinese temple background". You can see a small example of how the PHP code looks on this slide.
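The idea above can be sketched in a few lines of Python (not the original PHP code; the action names and the dict-based avatar representation are illustrative assumptions). Because each action is a pure function, a fixed list of actions always produces the same avatar description, which is what makes it trivially testable:

```python
# Hypothetical action-based generation: each action takes an avatar
# description and returns a new one, so generation is deterministic.

def set_gender(avatar, gender):
    return {**avatar, "gender": gender}

def add_item(avatar, item):
    return {**avatar, "items": avatar.get("items", []) + [item]}

def set_background(avatar, background):
    return {**avatar, "background": background}

def build_avatar(actions):
    """Fold a list of (action, argument) pairs into an avatar description."""
    avatar = {}
    for action, arg in actions:
        avatar = action(avatar, arg)
    return avatar

# Actions A, B and C from the example in the text:
actions = [
    (set_gender, "female"),
    (add_item, "cape"),
    (set_background, "chinese_temple"),
]
avatar = build_avatar(actions)
assert avatar == {
    "gender": "female",
    "items": ["cape"],
    "background": "chinese_temple",
}
```

The rendering step (turning this description into pixels) stays untested here, and that is the point: the interesting logic lives in the actions, which are cheap to verify.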
If you want to test by replicating the scenario of a user browsing to your page with the canvas-based UI and interacting with it, using for example Selenium, initially you're in trouble, as it is designed to work with standard HTML & CSS markup: it can navigate through nodes and/or CSS rules. You can also make it perform the action of moving to a certain coordinate and clicking (and we can easily ensure the coordinate falls inside the canvas, as we at least know its placement and size), so... how do we test it?
Just think about why Selenium works: simply because the browser lets it know about the DOM elements and their CSS properties.
Well, we could then emit additional data useful for testing, because we control the whole process of generating the UI component. We could make each component announce the coordinates where a user can click, so that we could write a test saying:
go to position (x, y) and click, then assert the webpage has navigated to the next screen.
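A minimal sketch of that idea, assuming each component can report the rectangle where it accepts clicks (the `Button` class and `component_at` helper are hypothetical names, not from the original code):

```python
# Each canvas component exposes its clickable rectangle, so a test only
# needs coordinates and never has to inspect pixels.

class Button:
    def __init__(self, name, x, y, width, height):
        self.name = name
        self.rect = (x, y, x + width, y + height)

    def contains(self, px, py):
        x1, y1, x2, y2 = self.rect
        return x1 <= px < x2 and y1 <= py < y2

def component_at(components, px, py):
    """Return the first component whose clickable area covers (px, py)."""
    for component in components:
        if component.contains(px, py):
            return component
    return None

ui = [Button("next", 10, 10, 100, 30), Button("cancel", 10, 50, 100, 30)]

assert component_at(ui, 20, 20).name == "next"
assert component_at(ui, 20, 60).name == "cancel"
assert component_at(ui, 500, 500) is None  # clicked the canvas background
```

A browser test would then only need to click the emitted coordinates and assert on the navigation outcome.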
This is nothing new: games have been doing this for game menus since long ago (buttons are just rectangles that detect clicks on them), and, moving forward, even Google Chrome does it with what they call hit test zones:
When the compositor thread sends an input event to the main thread, the first thing to run is a hit test to find the event target. Hit test uses paint records data that was generated in the rendering process to find out what is underneath the point coordinates in which the event occurred.
(This method extends method A)
The event handler emits events like "I've clicked coordinates (x, y)". This, combined with method A, leads to being able to infer "I've clicked button A" or "I've clicked the background of the canvas", and to having fine-grained control over events. This is how, for example, Windows has handled messages and signalled events over the controls of programs, windows, etc. for ages.
This would allow for integration tests where you don't need to actually render anything: you can generate the elements, feed them to a mock canvas, link it to the event handler, and test its outcomes by simulating clicks at different coordinates (after all, a canvas is just a list of bytes).
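Such an integration test could look roughly like this; the `FakeCanvas` class and its API are my own illustrative stand-ins, not part of any real framework. Nothing is rendered — we only wire hit zones to callbacks and dispatch simulated clicks:

```python
# Render-free integration test: a fake canvas that only knows about
# hit zones and dispatches simulated clicks to their callbacks.

class FakeCanvas:
    def __init__(self):
        self.hit_zones = []  # list of (x1, y1, x2, y2, callback)

    def register(self, x1, y1, x2, y2, callback):
        self.hit_zones.append((x1, y1, x2, y2, callback))

    def click(self, x, y):
        """Simulate a click; returns False if it hit the background."""
        for x1, y1, x2, y2, callback in self.hit_zones:
            if x1 <= x < x2 and y1 <= y < y2:
                callback()
                return True
        return False

navigated = []
canvas = FakeCanvas()
# Wire a "next" button rectangle to a navigation side effect.
canvas.register(10, 10, 110, 40, lambda: navigated.append("next_screen"))

assert canvas.click(50, 20) is True
assert navigated == ["next_screen"]
assert canvas.click(300, 300) is False  # background click, no navigation
```

The real event handler would sit where the lambda is; the test exercises the click-to-outcome path without touching a single pixel.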
A problem we had long ago at a previous job was that Selenium tests were potentially slow and flaky when performing AJAX calls, because the only way to detect those was "wait for element to be present" active waits, so we would wait longer than needed (to avoid waiting too little and getting a false negative just because something took longer but ended up showing). The frontend framework team solved this by making the JS MVC framework emit events like "I've started action xxx" and "I've finished action xxx", and tweaking Selenium to understand those and respond much more quickly to those AJAX loads.
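The core of that trick can be sketched independently of Selenium: the framework keeps a counter of in-flight actions, and the test harness polls for it to drop back to zero instead of sleeping for a fixed time. The `ActionTracker` class and its method names are hypothetical, a simplification of what the framework team built:

```python
# Hypothetical action tracker: the JS framework would bump this counter on
# "started action" events and decrement it on "finished action" events.

class ActionTracker:
    def __init__(self):
        self.in_flight = 0
        self.log = []

    def started(self, action):
        self.in_flight += 1
        self.log.append(f"started {action}")

    def finished(self, action):
        self.in_flight -= 1
        self.log.append(f"finished {action}")

    def idle(self):
        """A test harness polls this instead of 'wait for element' sleeps."""
        return self.in_flight == 0

tracker = ActionTracker()
tracker.started("load_cart")
assert tracker.idle() is False  # AJAX in flight: keep waiting
tracker.finished("load_cart")
assert tracker.idle() is True   # safe to assert on the page now
```

In a real setup, the counter would live in the browser (e.g. on a known JS global) and the Selenium side would poll it, reacting the instant the last action finishes rather than after a worst-case timeout.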
I am far from an expert in testing, but I firmly believe that, given some time to think and effort to implement, most scenarios can be reduced to simpler ones that can be actually tested.
As a small shameless plug: while reading a book about maze generation algorithms, I decided to port (and extend) the examples to Python, and along the way did some testing of the basic building blocks: generating the maze grid with its cells, calculating the neighbours of a cell, and distances. The results are small yet practical tests, and afterwards I can focus on implementing the book's algorithms; if something fails, I can be certain it is not a bug in a building block but in the algorithm being coded. Given time, I could now also easily test each algorithm.
I love playing videogames, and for a long time the PC has been my preferred platform. I've lived through everything from the old days of MS-DOS with 640KB of base memory (until DOS4GW came along) to most Windows versions (up to Windows 7, which I'm staying on until really forced to upgrade). I've seen all kinds of installers and copy-protections come and go up to the arrival of the digital era, where I no longer need more than two hard disk drives full of my old CD/DVD games dumped to ISOs plus an internet connection to download the ones I have in digital format. I've suffered quite a few graphics driver crashes (I'm mostly looking at you, NVidia) until Windows 7 came along with user-space and system-space drivers, and I no longer get angry whenever most games I want to play ask me to update themselves before I can have fun.
Three factors have changed the rules of play and made everything in theory easier and simpler by going digital:
All this initially sounds great for us gamers. For many years only Steam was doing it, and physical PC sales slowly decayed to the point of being so marginal that I cannot conceive of buying physical PC titles except when they are cheaper via Amazon (and even then, I just throw away everything except the CD key). It has come to the point where most retail games are now just a Steam installer binary plus the downloaded Steam data: you add the game to your collection, install it from the DVDs, and then it updates to the latest patch.
But lately, other major game publishers have wanted their share of the cake instead of paying Valve, and thus started to create their own clients and gamification layers (with cross-platform accounts, friend lists, achievements and the like)... and as of 2019 I currently have six game launchers installed:
And there are even more (like Bethesda launcher or the Windows Store), so this list is just a sample and it can be even worse.
I now have the fun? sad? problem of sometimes not knowing whether I got a certain game on Steam, GOG or Origin (e.g. Dead Space). I also have to keep a list of user accounts, complete with two-factor auth configurations or "authenticator apps" for each. Just compare that with any console, where you have a single, centralized store with a single list of installed games that you launch with a single button.
I totally understand publishers not wanting to "pay the competition" (mostly Valve) when the PC is an open platform, and of course it is good for them to control everything and have their games distributed digitally, because it also solves the problem of the resale/second-hand market (which probably hurts them far more on consoles than on PC, but anyway, one less problem to care about).
But at least there's some hope, as GOG recently announced they're working on a universal PC launcher, GOG Galaxy 2.0. If the idea works, and especially if they don't attempt any self-promotion over other stores' titles or anything else that could anger the other store publishers, it could be our salvation from this chaos. Just scanning your hard drive and keeping a list of game "shortcuts", even if those then launch the corresponding secondary launcher/store and boot the game (as Steam now does with, for example, some Ubisoft titles launching UPlay), would be more than enough for me. There's really no need for unified achievement systems, unified friend lists or "one shop to rule them all".
A small note: GOG Galaxy will only be for PC, so I can forget about Linux, but there we at least have some other nice alternatives, like Lutris with its amazing Wine-tuned installers. Gaming on Linux is getting better, but it's still far from being a viable alternative for the masses.
Now, I'm no expert in the field, but I've been playing games for a long time and this is the only thing I really need and want: a unified "installed games library" for PC. 🤞 Fingers crossed we'll get at least that.