Between 1980 and 1990, assembly was the most used language for almost everything, from videogames to nasty viruses to most everyday programs. After playing the game and reading and watching some technical material about La Abadía del Crimen (about which, by the way, there are two books in Spanish: I & II), I found it mentioned multiple times that the original AMSTRAD CPC version was technically amazing, but that it wasn't easy to port because of the self-modifying code it contains.
As it sounds heretical today to even think about patching in-memory code, I've read a bit about how it works and which scenarios were the most common ones.
Now, back in the day this made sense: memory and CPU were so restricted that performing an if too frequently could really hurt your game, and keeping properly scoped functions with different logic pieces could mean precious extra cycles spent pushing and popping registers on the stack. But nowadays nobody would even think about something as "simple" as manually changing the instruction pointer (except to circumvent videogame protections, hack videogame consoles, and other shady areas). And even if you wanted to, Data Execution Prevention, memory page protection, and the tons of caches between the code and its real execution make it a really bad idea to even try for anything normal.
But another point is that there are more modern ways to do exactly what self-modifying code in videogames mostly tried to achieve (avoiding conditionals): instead of if X then A else B, you provide either an A handler or a B handler, and simply invoke whichever is currently assigned.
I'd definitely go for function handlers if I were to build any non-trivial game today: it is clean and very friendly towards testing on both sides, as you can inject fake AIs and test each one in isolation.
In the end you're just modifying a function pointer to point to a different address, so it is really similar to adding or changing a JMP assembly instruction. Plus, nobody will go crazy trying to debug logic that has been patched in-memory and no longer resembles the source code.
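As a sketch of that function-handler idea (in Python rather than assembly, with hypothetical names like patrol and chase that are mine, not from any real game): the game loop always calls the current handler, and changing behaviour means swapping the handler once instead of branching every frame, much like rewriting a JMP target.

```python
# Hypothetical sketch: each entity holds a behaviour handler that is
# swapped when its state changes, so the game loop has no conditional.

def patrol(entity):
    entity["x"] += 1  # walk a fixed route

def chase(entity):
    entity["x"] += 2  # run towards the player

guard = {"x": 0, "update": patrol}

guard["update"](guard)   # patrol step: x becomes 1
guard["update"] = chase  # "patch" the behaviour, like rewriting a JMP target
guard["update"](guard)   # chase step: x becomes 3
print(guard["x"])        # 3
```

The loop never asks "which state am I in?"; the assignment did that once, up front.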
Which other ways of avoiding conditionals and/or organizing your code to avoid huge switches do you know?
Note: Actually, those anti-object-orientation "patterns" would come back with J2ME, where hardware constraints once again favoured a non-Java approach: a single class with as much inlined code as possible, global variables, and as few methods as possible.
Title: The Subtle Art of Not Giving a F*ck
Author: Mark Manson
I usually ignore self-help books, but this title had already caught my attention because of its name when my partner asked me to buy it... and I couldn't resist reading it.
It talks about stopping trying to force ourselves to be positive all the time, trying to avoid caring about everything, and focusing instead on what really matters (and how that doesn't mean ignoring the rest, just "not giving a f*ck about it"). It spends quite a good amount of time talking about fighting uncertainties and fears, and facing our problems instead of relying on others...
But the book also spends a lot of time on stories about rock bands and war heroes that, while not boring, feel too long for exemplifying a concept. The humour is sometimes too harsh or inappropriate, and it becomes quite clear that the author either had past problems with women/sex and drugs, or friends with bad habits. A few too many stereotypes and some questionable sentences or judgements weigh negatively against the generally good advice.
Overall, interesting to read, and probably provocative on purpose, so I'll keep the interesting parts in mind (and in the notes).
Title: How Ideas Spread
Author: Jonah Berger
Continuing with my reading and listening on topics that are not strictly technical, I recently listened to this 6-hour audiobook.
Clearly targeted at marketing people, it talks about varied topics: how products, brands, and even gossip and memes sometimes spread widely, while other times they just fade away. It is a mixture of different strategies, studies, and sometimes slightly weird statistics (the baby names study doesn't sound like the best example of separating correlation from causation), explaining good and bad examples of advertising campaigns, the delicate balance of good versus bad feedback, and how social networks and influencers might affect sales but can also have adverse effects.
A mixed bag and not too deep, but it includes interesting advice on how to better market products and increase brand awareness.
This is a trivial but nonetheless useful example of why measuring code changes is relevant when we are adding complexity.
After reading about Chrome's new native image lazy-load attribute, I decided to implement it as a Pelican plugin (source code here), because although I use Firefox, Chrome is currently the dominant browser.
I already had to iterate a few times over the original < 40 lines implementation, which simply added the attribute: I had to ignore base64-encoded inline images and raw markup that isn't really an image (e.g. code samples containing <img>), plus do small clean-ups.
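A minimal sketch of that first iteration (not the actual plugin code; the regex and function name are mine) would add loading="lazy" to image tags while skipping inline base64 images. Note that a plain regex like this still cannot tell apart a real image from an <img> shown inside a code sample, which is one of the cases the real plugin had to handle separately.

```python
import re

# Match "<img " only when the tag has no loading= attribute yet
# and its src is not an inline base64 data URI.
IMG_TAG = re.compile(r'<img (?![^>]*loading=)(?![^>]*src="data:)', re.IGNORECASE)

def add_lazy_loading(html):
    # Inject the attribute right after the tag name.
    return IMG_TAG.sub('<img loading="lazy" ', html)

print(add_lazy_loading('<img src="/images/photo.jpg">'))
# <img loading="lazy" src="/images/photo.jpg">
```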
As none of the post images have height attributes, Chrome was reflowing and sometimes even requesting them twice due to CSS computations, so I built a small json-backed image dimensions cache that fetches post images, checks their size using PIL, and adds the appropriate markup attributes.
Afterwards, as the post images have a maximum width, big ones were getting vertically stretched, so I had to add another check to restrict the width and, in that case, recalculate the resized height too, keeping the original image proportions.
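The proportional resize logic can be sketched like this (function names and the max width value are my assumptions, not the plugin's): PIL reports the intrinsic size, and the width/height attributes get scaled down when the image exceeds the layout's maximum width.

```python
def display_size(width, height, max_width=740):
    """Given an image's intrinsic size, compute the width/height
    attributes to emit, keeping the original aspect ratio."""
    if width <= max_width:
        return width, height
    # Recalculate the height proportionally so big images aren't stretched.
    return max_width, round(height * max_width / width)

def measure(path):
    # The intrinsic size comes from PIL, roughly like this:
    from PIL import Image  # third-party dependency (Pillow)
    with Image.open(path) as img:
        return img.size  # (width, height)

print(display_size(1480, 1000))  # (740, 500)
```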
Notice how what was a simple "add an attribute to each image" grew into a 120-line plugin? It is still quite small and manageable, but it has to handle error scenarios and various nitpicks that most posts won't require, and the caching not only relies on external IO (both local files and network requests) but also accounts for around 40% of the code.
So, when today I was thinking about how I could improve its speed, my first requirement was measuring the performance gain versus the complexity added. Each article (post/page) in Pelican triggers an execution of the plugin logic, which already loads and saves the cache only once per post (instead of once per image). This still means acquiring two file handles, reading/writing, and closing. With the existing code and this blog, this was the original metric:
Processed 500 articles, 0 drafts, 2 pages, 7 hidden pages and 0 draft pages in 8.20 seconds
Pelican plugins do not keep state by themselves, which is nice but hinders in-memory caching. A hack I came up with to overcome this limitation was storing the cache at instance.settings["IMAGE_CACHE"], because Python dictionaries are always passed by reference. This way I could load the cache from file only once, while still storing it per article to play it safe. A few lines of code added, and I measured how much faster it got:
Processed 500 articles, 0 drafts, 2 pages, 7 hidden pages and 0 draft pages in 8.18 seconds
😐 Now, I don't know about you, but to me saving 20 milliseconds when regenerating the blog... is not much of a difference. And if I kept the optimization, I would be reducing but not fully removing the disk IO operations, while adding another crossed boundary to care about: runtime settings mutation. If it had gone down a few seconds it might have been worth it, but for a 0.24% gain on a personal project, I'd rather keep the plugin code simple.
And of course, if I hadn't measured it I would have just kept it, because it's awesome to optimize, your own code never fails even when doing hacky things, squeezing the maximum is always great, and... The fact is that both my SSD and Linux are smarter than me: reading a json file 500 times so quickly and consecutively is surely buffered and cached by the drivers, and will only reach the physical storage at the end, so why bother in this scenario.
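The by-reference trick behind that hack can be shown with a stripped-down stand-in (everything here is simplified; the real plugin receives Pelican's settings object): because every article gets the same dict, a key stashed in it survives across plugin invocations, so the file only needs to be read once.

```python
# Stand-in for Pelican's settings dict, passed to every article.
settings = {}

def process_article(settings):
    # setdefault returns the existing cache, or installs a fresh one.
    cache = settings.setdefault("IMAGE_CACHE", {"loads": 0})
    if cache["loads"] == 0:
        cache["loads"] += 1  # pretend the json file is read here, once

process_article(settings)
process_article(settings)
print(settings["IMAGE_CACHE"]["loads"])  # 1 -> the file was "loaded" only once
```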
Update: What I actually implemented is a small ImageCache class with a simple dirty flag to avoid unnecessary writes when the cache hasn't changed. This took down 50% of the IO operations and around 2 seconds, in exchange for a few extra lines of code and an actually cleaner logic segregation.
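A sketch of that ImageCache idea (method names and structure are my assumptions, not the plugin's actual code): load the json file once on construction, mark the cache dirty only when an entry really changes, and skip the write entirely when nothing did.

```python
import json
import os

class ImageCache:
    """json-backed cache of image dimensions, written back only when dirty."""

    def __init__(self, path):
        self.path = path
        self.dirty = False
        if os.path.exists(path):
            with open(path) as fd:
                self.data = json.load(fd)
        else:
            self.data = {}

    def get(self, url):
        return self.data.get(url)

    def set(self, url, dimensions):
        # Only flag as dirty when the entry actually changes.
        if self.data.get(url) != dimensions:
            self.data[url] = dimensions
            self.dirty = True

    def save(self):
        # Skip the file write (and its IO cost) if nothing changed.
        if self.dirty:
            with open(self.path, "w") as fd:
                json.dump(self.data, fd)
            self.dirty = False
```

Calling save() after every article then becomes nearly free in the common case where no new images appeared.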
Speaking and pronunciation are probably my weakest areas of English, so after doing a course on British English pronunciation, I recently grabbed and just finished another: English Fluency Master.
3 hours of tips and tricks to improve fluency: word blending and "jumping", proper pronunciation of sounds, intonation, tone, stress, and some tongue twisters. But what I liked most about the course is the teacher, Luke. The way he explains everything (repeating multiple times at different speeds, with jokes and funny voices, remarking on what we should avoid, etc.) is a perfect combination of informal but helpful teaching.
Equal parts enjoyable and useful, definitely recommended.