Kartones Blog

Be the change you want to see in this world

Self-Modifying code and avoiding conditionals

La Abadía del Crímen title screenshot

Between 1980 and 1990, assembly was the most used language for almost everything, from videogames to nasty viruses to most everyday programs. After playing the game and reading and watching some technical pieces about La Abadía del Crímen (about which, by the way, there are two books in Spanish: I & II), I've seen it mentioned multiple times that the original Amstrad CPC version was technically amazing, but wasn't easy to port because of the self-modifying code it contains.

As it sounds heretical today to think about patching in-memory code, I've read a bit about how it works. These are the most common scenarios:

  • Avoiding branching code, conditional checks, etcetera, by modifying in-memory instructions to jump to a different location
  • Reusing memory structures with less code (e.g. making different characters use the same memory struct)
  • Hiding things like interrupt calls or certain strings/numbers/memory addresses, mostly either for viruses or for copy-protection mechanisms

Now, back in the day this made sense: memory and CPU were so restricted that performing an if frequently could really hurt your game, and keeping properly scoped functions with different logic pieces could mean precious extra cycles spent pushing and popping registers from the stack [1]. But nowadays nobody would even think about something as simple as manually changing the instruction pointer (outside of circumventing videogame protections, hacking videogame consoles and other shady areas). And even if you wanted to, Data Execution Prevention, memory page protection, and the tons of caches sitting between the code and its real execution make it a really bad idea to even try for anything normal.

But another point is that there are more modern ways to achieve exactly what self-modifying code in videogames mostly tried to do, which is avoiding conditionals:

  • Bit masks/manipulation: Old but still very valid when performance is relevant. The caveat is that the code is not as readable, and not everything can be expressed as bit masks...
  • Functional programming: Strictly speaking this does not remove conditionals, but you tend to reduce their usage when you think in terms of pipelining functions and handling just input/output instead of keeping state all around.
  • Object orientation/duck typing: Different classes (or functions, if the language allows it) provide methods that share the same interface, and you invert where the conditional lives (although whether it gets eliminated depends on how you instantiate the object): instead of doing if X then A else B, you provide either an X-object that does A or a non-X-object that does B.
  • Function handlers: Almost the same as the previous point: you define a function handler (a C# delegate, you name it) and just swap it for another one when you wish to modify the current behaviour. A super-simple C# example I did long ago is here.
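As a quick illustration of that last approach, here is a minimal sketch in Python rather than C#; the enemy structure and handler names are made up for the example:

```python
# Behaviour is selected by swapping a callable, not by branching on a flag.

def patrol(enemy):
    return "{} patrols the corridor".format(enemy["name"])

def chase(enemy):
    return "{} chases the player".format(enemy["name"])

def update(enemy):
    # No conditional here: whichever handler is installed gets called.
    return enemy["act"](enemy)

guard = {"name": "Guard", "act": patrol}
update(guard)          # "Guard patrols the corridor"
guard["act"] = chase   # player spotted: swap the handler, no if needed
update(guard)          # "Guard chases the player"
```

The game loop stays identical no matter how many behaviours exist; adding a new one means adding a function, not another branch.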

I'd definitely go for function handlers if I were to build any non-trivial game today: it is clean and very testing-friendly on both sides, as you can inject fake AIs and test each one in isolation.

In the end you're just pointing a function pointer at a different address, so it is really, really similar to adding or changing a JMP assembly instruction. Plus, nobody will go crazy trying to debug logic that has been patched in-memory and no longer resembles the source code.

Which other ways of avoiding conditionals and/or organizing your code to avoid huge switches do you know?

[1]: Actually, those anti-object-orientation "patterns" would come back with J2ME, where hardware constraints once again favoured a non-Java approach: a single class with as much inlined code and as many global variables, and as few methods, as possible.

Book Review: The subtle art of not giving a f*ck


The subtle art of not giving a f*ck book cover

Title: The subtle art of not giving a f*ck

Author: Mark Manson

I usually ignore self-help books, but this title had already caught my attention because of its name when my partner asked me to buy it... and I couldn't resist reading it.

It talks about stopping trying to force ourselves into being positive all the time, avoiding caring about everything, and instead focusing on what really matters (and how that doesn't mean ignoring the rest, just "not giving a f*ck about it"). It spends quite a good amount of time talking about fighting uncertainties and fears, and facing our problems instead of relying on others...

But the book also spends a lot of time on stories about rock bands and war heroes that, while not boring, feel too long for exemplifying a concept. The humour is sometimes too harsh or inappropriate, and it becomes quite clear that the author either had past problems with women/sex and drugs, or friends with bad habits. A few too many stereotypes and some criticizable sentences or judgements negatively balance the generally good advice.

Overall, interesting to read, and probably provocative on purpose, so I'll keep the interesting parts in mind (and in the notes).


  • Avoid the fixation on positive, as it reinforces and reminds of what we lack/failed to be. Accepting negative experiences is a positive experience, and makes us grow, learn, advance
  • Feedback loop from hell: e.g. "being anxious about being anxious"
  • Give less fucks, but give them to things worth giving a fuck to
  • Backwards law: pursuing the negative generates the positive: failures teach you how to understand better, pain in sports reinforces your muscles, etc.
  • Not giving a fuck doesn't mean being indifferent, means being comfortable with being different
  • You are always choosing, even when you don't choose (you choose to do nothing about it)
  • suffering and pain trigger change
  • life is full of problems, so make happiness come from seeking good problems
  • choose your struggles, seek goals you can influence and act upon
  • Everyone is great but most people are not special. Most special people, those who excel at one thing, usually only excel at that thing and sacrificed many other things and put lots of effort. If everyone were extraordinary, nobody would be
  • No special treatments, no entitlements. Everyone can suck (including us), but it is fine
  • Identify the problems that cause you suffering and try to fix them, not avoid them. Leverage your values against the suffering(s), and find good values that make you happy instead of a never-ending chase. Sample bad values: Pleasure, material success, always being right, forcibly staying positive
  • you cannot take responsibility for everything that happens on your life
  • uncertainty is inevitable. Certainty is the enemy of growth
  • discovering failures is the path for improvement
  • rejection: develop the ability to say and hear "no"
  • Learn to live with and fight Manson's law of Avoidance: The more something threatens your identity, the more you will avoid it
  • The "Do Something" Principle: Whenever stuck on something, just take action, do something and start working on it. Eventually the right ideas will come up.
  • Action isn't just the effect of motivation; it's also the cause of it
  • Action -> Inspiration -> Motivation

Book Review: How Ideas Spread


How Ideas Spread book cover

Title: How Ideas Spread

Author: Jonah Berger

Continuing with my reading and listening on topics that are not strictly technical, I recently listened to this 6-hour audiobook.

Clearly targeted at marketing, it talks about the varied ways in which products, brands, and even gossip and memes sometimes become widely spread, while other times they just fade away. It is a mixture of different strategies, studies and sometimes a bit weird statistics (the baby-names stuff doesn't sound like the best example of correlation versus causation), explaining good and bad examples of advertisement campaigns, the delicate balance of good versus bad feedback [1], or how social networks and influencers might affect sales but can also have adverse effects.

A mixed bag and not too deep, but it includes interesting advice on better marketing products and increasing brand awareness.

On measuring and adding complexity

This is a trivial but nonetheless useful example of why measuring code changes is relevant when we are adding complexity.

After reading about Chrome's new native image lazy-load attribute, I decided to implement it as a Pelican plugin (source code here), because although I use Firefox, Chrome is currently the dominant browser.

I already had to iterate a few times over the original sub-40-line implementation, which simply added the attribute.

I had to ignore base64-encoded inline images, plus raw markup that isn't really an image (e.g. code samples containing <img>), plus small clean-ups.
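The core of such a filter can be sketched in a few lines; this is an illustrative version under those same rules (skip inline data URIs and already-tagged images), not the plugin's actual code, and the function name is made up:

```python
import re

# Matches any <img ...> tag; the decision to patch it happens per-match.
IMG_RE = re.compile(r'<img\s([^>]*)>')

def add_lazy_attribute(html):
    """Add loading="lazy" to <img> tags, skipping base64-inlined images."""
    def patch(match):
        attrs = match.group(1)
        if 'loading=' in attrs or 'src="data:' in attrs:
            return match.group(0)  # leave inline/already-patched images alone
        return '<img loading="lazy" {}>'.format(attrs)
    return IMG_RE.sub(patch, html)
```

In the real plugin this kind of filter would be hooked to a Pelican signal so it runs over each article's generated HTML.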

As none of the post images have width or height attributes, Chrome was reflowing and sometimes even requesting them twice due to CSS computations, so I built a small JSON-backed image dimensions cache that fetches post images, checks their size using PIL, and adds the appropriate markup attributes.
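The idea can be sketched roughly like this (the cache file name and function name are made up; the real plugin's code differs):

```python
import io
import json
import urllib.request

from PIL import Image  # Pillow

CACHE_PATH = "image_dimensions.json"  # hypothetical cache file name

def get_dimensions(url, cache):
    """Return (width, height) for an image URL, memoized in a JSON file."""
    if url not in cache:
        data = urllib.request.urlopen(url).read()
        with Image.open(io.BytesIO(data)) as image:
            cache[url] = image.size  # (width, height)
        with open(CACHE_PATH, "w") as handle:
            json.dump(cache, handle)  # persist for the next build
    return tuple(cache[url])
```

The cached dimensions can then be injected as width/height attributes on the corresponding <img> tags, so the browser reserves space before the image loads.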

Afterwards, as the post images have a maximum width, big ones were being vertically stretched, so I had to add another check to restrict the width and, if exceeded, recalculate the "resized height" too, keeping the original image proportions.
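The proportional clamp itself is simple arithmetic; a sketch, where MAX_WIDTH is a made-up value for this blog's layout:

```python
MAX_WIDTH = 720  # hypothetical maximum content width, in pixels

def fit_dimensions(width, height, max_width=MAX_WIDTH):
    """Clamp width to max_width, scaling height to keep the aspect ratio."""
    if width <= max_width:
        return width, height
    ratio = max_width / width
    return max_width, round(height * ratio)
```

For example, a 1440x900 image would get 720x450 attributes, so the browser reserves a correctly proportioned box instead of stretching it.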

Notice how what was a simple "add an attribute to each image" grew into a 120-line plugin? It is still quite small and manageable, but it has to handle error scenarios and different nitpicks that most posts won't require, and the caching not only relies on external IO (both local files and network requests) but also accounts for around 40% of the code.

So, when I started thinking today about how I could improve its speed, my first requisite was measuring the performance gain versus the complexity added. Each article (post/page) in Pelican triggers an execution of the plugin logic, which already loads and saves the cache only once per post (instead of once per image). This still means acquiring two file handles, reading/writing and closing. With the existing code and this blog, this was the original metric:

Processed 500 articles, 0 drafts, 2 pages, 7 hidden pages and 0 draft pages in 8.20 seconds

Pelican plugins do not keep state by themselves, which is nice but hinders in-memory caching. A hack I came up with to overcome this limitation was storing the cache at instance.settings["IMAGE_CACHE"], because Python dictionaries are always passed by reference. This way I could load the cache from its file only once, while still storing it per article to play it safe. A few lines of code added, and I measured how much faster it got:
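The hack, sketched; the settings key and the helper function are illustrative names, not the plugin's exact code:

```python
import json

def load_cache_from_disk(path="image_dimensions.json"):
    """Load the JSON-backed cache, or start empty on the first run."""
    try:
        with open(path) as handle:
            return json.load(handle)
    except FileNotFoundError:
        return {}

def get_image_cache(instance):
    # Every content object shares the same settings dict, and Python
    # passes dicts by reference, so the cache loaded here survives
    # across plugin invocations within a single Pelican build.
    if "IMAGE_CACHE" not in instance.settings:
        instance.settings["IMAGE_CACHE"] = load_cache_from_disk()
    return instance.settings["IMAGE_CACHE"]
```

Any article processed later in the same build sees, and can mutate, the same dictionary instance.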

Processed 500 articles, 0 drafts, 2 pages, 7 hidden pages and 0 draft pages in 8.18 seconds

😐 Now, I don't know about you, but to me shaving 20 milliseconds off regenerating the blog... it is not much of a difference. And if I kept the optimization, I would be reducing but not fully removing the disk IO operations, while adding another crossed boundary to care about: runtime settings mutation. If it had gone down a few seconds it might have been worth it, but for a 0.24% gain on a personal project, I'd rather keep the plugin code simple.

And of course, if I hadn't measured it I would have just kept it, because it's awesome to optimize, and your own code never fails even when doing hacky things, and squeezing the maximum is always great, and... The fact is that both my SSD and Linux are smarter than me: reading a JSON file 500 times so quickly and consecutively is surely buffered and cached by the drivers, and will only reach the physical storage at the end, so why bother in this scenario.

Update: What I actually ended up implementing is a small ImageCache class with a simple dirty flag to avoid unnecessary writes when the cache hasn't changed. This took down 50% of the IO operations and around 2 seconds, in exchange for a few extra lines of code and an actually cleaner logic segregation.
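A minimal sketch of such a class, assuming the same JSON-on-disk layout as before (names are illustrative, not the plugin's actual code):

```python
import json

class ImageCache:
    """JSON-backed dimensions cache that only writes to disk when dirty."""

    def __init__(self, path="image_dimensions.json"):
        self.path = path
        self.dirty = False
        try:
            with open(path) as handle:
                self.entries = json.load(handle)
        except FileNotFoundError:
            self.entries = {}

    def get(self, url):
        return self.entries.get(url)

    def set(self, url, dimensions):
        if self.entries.get(url) != dimensions:
            self.entries[url] = dimensions
            self.dirty = True  # only now does a save become necessary

    def save(self):
        if not self.dirty:
            return  # skip the write entirely when nothing changed
        with open(self.path, "w") as handle:
            json.dump(self.entries, handle)
        self.dirty = False
```

On a typical rebuild, where almost every image is already cached, save() becomes a no-op, which is where the halved IO comes from.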

Course Review: English Fluency Master (Udemy)

Speaking and pronunciation are probably the areas of English I'm worst at, so after doing a course about British English pronunciation, I recently grabbed and just finished another: English Fluency Master.

3 hours of tips and tricks to improve fluency: word blending and "jumping", proper pronunciation of sounds, intonation, tone, stress, and some tongue twisters. But what I liked most about the course is the teacher, Luke. The way he explains everything, repeating things multiple times at different speeds, with jokes and funny voices, pointing out what we should avoid, etc., is a perfect combination of informal but helpful teaching.

Equal parts enjoyable and useful, definitely recommended.

Previous entries