Kartones Blog

Be the change you wanna see in this world

Micro-Frontends (and self-contained systems)

The topic of the past weeks has been Micro-Frontends, due to this ThoughtWorks article about them (if you haven't read it, stop, go read it, then come back and continue). It has caused some discussions, mostly because there seems to be a trend now towards going mono-repo with Javascript, uniting all the things to build an easier path to update dependencies for all your FE pieces.

I know very little about current frontend development, tooling and best practices, so I'm not going to judge how good or bad a mono-repo approach is in itself. I just question how fast it can be, both from experience and personal intuition: yes, you update React once and every single Javascript piece of your system is updated, but at the same time we all know that updates are often harder than expected and full of unforeseen problems, so it could become a bottleneck as it is a somewhat all-or-nothing action.

Anyway, what I really wanted was to add to that really interesting article some related links around the topic, as in the past at TheMotion our idea was to move towards Self-Contained Systems, and at Ticketea the new checkout frontend was a "by the book" example of a micro-frontend (including iframe-based injection on the main website).

If you're interested, check the following resources:

Book Review: How to win friends and influence people



Title: How to win friends and influence people

Author: Dale Carnegie

This is a book I've heard about many times and that more than one person has recommended to me in the past, so for a change from dev podcasts and the like on my commute, I got the audiobook and just finished listening to it.

It is a curious, mixed-feelings book. On one side, it has a lot of good advice on varied topics, from how to deal with friendships and love, to business, to delivering hard news, to encouraging good behaviour from people who, for example, might be performing lower than they should... Being a revised edition, I don't know if the original had the concept of "organizational resilience", but at least some examples include the current impact of social media, Twitter, Facebook and the like, so while the original dates from 1936 you won't notice it.

But on the other side, and in my personal opinion only, the book goes so far into being good with others and assuming good will and good intentions that my personal experience rejected some points as "way too positive". Not everything can be solved with a smile (or even deserves one), and while smiling and being considerate with others will surely help, for example in negotiations, if you don't really feel it, it can backfire and make you appear fake and deceptive (speaking again from experience: in the cases I've seen, the effect has been worse).

Surely it has lots of examples of presidents and miscellaneous (often famous) people doing amazing human acts, but I feel the book has a huge aura of "just be good with everyone" instead of covering more specific, suboptimal scenarios and how to properly deal with them. I wasn't expecting something as full of examples and as up to date as, for example, Managing Humans, but neither something so mystic and overly optimistic.

Still, interesting to read and containing some good advice. Just not as amazing as I expected.


I wish I could have taken more notes, but with an audiobook it was hard to note anything but very concise points. Anyway, there are nice summaries online, like the one at Wikipedia.

  • generosity, trustworthiness
  • who you are and what you do makes others follow you
  • meaning before the medium
  • appreciation, honesty, sincere interest, empathy
  • increased touch points while losing touch
  • take the time to craft meaningful responses
  • try to avoid telling people they are wrong (including tones & gestures)
  • try to win over the most influential people to your ideas first; they'll then spread them
  • ask questions instead of imposing orders
  • embrace failure/mistakes -> organizational resilience. 5 steps:
    1. acknowledge that failures happen
    2. encourage dialog to foster trust
    3. separate the person from the failure: the project failed, not you
    4. learn from your mistakes
    5. create a risk-taking, fearless environment
  • magnify improvement
  • praise for good results, encouragement anytime (even when things go poorly)
  • make failures easy to correct

Hard to test scenarios

I will start this blog post with two examples of non-code "hard to test" scenarios (about sight) and their solutions:

  • How do blind people read when they cannot see? They touch the Braille letters
  • How do you find an invisible man? You can spray paint in the room to cover him, use an infrared camera to spot him, or flood the room with water so that when he walks you see where he is

Instead of thinking "oh, I cannot see xxxxx, so I can't do it", the first scenario uses a different approach (a different sense), while the second one transforms the situation into one that fits our needs (making the invisible man visible so we can use our sight to find him).

Now, applying this to software: in many cases hard-to-test things are due to code complexity, and I don't want to dwell on that as there are many articles and techniques on how to solve it, but sometimes there are scenarios that really seem hard to test, or too fragile if tested traditionally.

I'm going to use an example that I read somewhere not long ago: imagine you want to test some UI that gets rendered into an HTML5 canvas component. Assume we have full control over the source code, meaning that we build both the rendering and the associated Javascript that handles interactions with the canvas.

This scenario poses a bit of a challenge because, once rendered, the canvas is just a container of pixels (or a byte array), so if we have buttons and other interactive items, how can we test them?

Well, three types of tests come to mind:

I) Unit tests

First of all, if you control generating the elements, you can build a lot of tests around element generation and composition of those elements, without actually needing to care about the rendering/UI.

As I prefer not to repeat things already explained way better, there is an Uncle Bob blog post about applying TDD to terrain generation which exemplifies how to test something as initially hard as a terrain generation algorithm. The "trick" is to focus not on the data generated but on the generators and actions. Terrain generation is deterministic because, oversimplifying, the only things that change are the input parameters and the seed, so if you test those well enough, you can replicate any terrain inside a test (although the test could get big!).
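To make the determinism point concrete, here is a tiny sketch (the generator function is invented for illustration, not Uncle Bob's actual code): with the same seed and parameters, a pseudo-random generator yields the same "terrain" every time, so any terrain can be replayed inside a test.

```python
import random

def generate_terrain(seed, size):
    """Toy 'terrain' generator: the output is fully determined by seed and size."""
    rng = random.Random(seed)  # local RNG so global state can't interfere
    return [rng.randint(0, 255) for _ in range(size)]

# Same seed and parameters -> identical terrain, replicable inside a test:
assert generate_terrain(seed=42, size=100) == generate_terrain(seed=42, size=100)
# Different seed -> different terrain:
assert generate_terrain(seed=42, size=100) != generate_terrain(seed=7, size=100)
```

The key design choice is passing the seed in explicitly instead of relying on global random state; that single change is what makes the generator testable.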

As a real-world example, at a previous job I had to build a server-side avatar generation platform, and wanted to add tests to it. By making the generation code a set of deterministic actions, it was really easy to test that, given actions A, B and C, the result was a "female avatar wearing a cape with a Chinese temple background". You can see a small example of how the PHP code looks at this slide.
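A minimal sketch of that action-based pattern, in Python for brevity (the real platform was PHP, and all the names here are made up for illustration): each action is a small deterministic function, and generation is just applying them in order, so a test can assert on the final state without rendering a single pixel.

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    traits: dict = field(default_factory=dict)

# Each action is deterministic: same avatar in, same avatar out.
def set_gender(avatar, gender):
    avatar.traits["gender"] = gender
    return avatar

def add_cape(avatar):
    avatar.traits["cape"] = True
    return avatar

def set_background(avatar, background):
    avatar.traits["background"] = background
    return avatar

def generate(actions):
    """Applies a list of (action, args) tuples in order."""
    avatar = Avatar()
    for action, args in actions:
        avatar = action(avatar, *args)
    return avatar

# The test replays actions A, B and C and asserts on the composed result:
result = generate([
    (set_gender, ("female",)),
    (add_cape, ()),
    (set_background, ("chinese temple",)),
])
assert result.traits == {"gender": "female", "cape": True,
                         "background": "chinese temple"}
```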

II) Acceptance tests - method A

If you want to replicate the scenario of a user browsing to your page with the canvas-based UI and interacting with it, using for example Selenium, initially you're in trouble, as it is designed to work with standard HTML & CSS markup: it can navigate through nodes and/or CSS rules. You can also make it move to a certain coordinate and click (and we can easily ensure the coordinate falls inside the canvas, as we at least know its placement and size), so... how do we test it?

Just think about why Selenium works: simply because the browser lets it know about the DOM elements and their CSS properties.

Well, we could then emit additional data useful for testing, because we control the whole process of generating the UI component. We could make each component report the coordinates where a user can click, so a test could say go to position (x, y) and click, then assert the webpage has navigated to the next screen.

This is nothing new: games have been doing this for game menus since long ago (buttons are just rectangles that detect clicks on them), and, moving forward, even Google Chrome does it with what they call hit test zones:

When the compositor thread sends an input event to the main thread, the first thing to run is a hit test to find the event target. Hit test uses paint records data that was generated in the rendering process to find out what is underneath the point coordinates in which the event occurred.
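Here's a minimal sketch of the hit-test idea, in Python for brevity (a real implementation would live in the canvas's Javascript, and the names are invented): each component registers the rectangle it occupies, and a click is resolved to the topmost component containing that point.

```python
class HitTestRegistry:
    """Components register clickable rectangles; a click at (x, y)
    resolves to the last-registered (i.e. topmost) matching one."""

    def __init__(self):
        self._zones = []  # list of (name, x, y, width, height)

    def register(self, name, x, y, width, height):
        self._zones.append((name, x, y, width, height))

    def hit_test(self, px, py):
        """Returns the topmost component under (px, py), or None."""
        for name, x, y, w, h in reversed(self._zones):
            if x <= px < x + w and y <= py < y + h:
                return name
        return None

registry = HitTestRegistry()
registry.register("background", 0, 0, 800, 600)
registry.register("next_button", 700, 500, 80, 40)

assert registry.hit_test(720, 510) == "next_button"  # inside the button
assert registry.hit_test(10, 10) == "background"     # button missed
assert registry.hit_test(900, 900) is None           # outside the canvas
```

With this data exported alongside the canvas, a Selenium test only needs to move to a registered coordinate, click, and assert on the navigation outcome.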

III) Acceptance tests - method B

(This method extends method A)

Another way to do tests, assuming you control the source code of the Javascript event-handling code, is combining the previous method of emitting coordinate data with making the event-handler code also emit events, like I've clicked coordinates (x, y). Combined with method A, this leads to being able to infer I've clicked button A or I've clicked the background of the canvas, and to having fine-grained control over events. This is how, for example, Windows has handled messages and signalled events over the controls of programs, windows, etc. for ages.

This would allow for integration tests where you don't need to actually render anything: you can generate the elements, feed them to a mock canvas, link it to the event handler, and test its outcomes by simulating clicks at different coordinates (after all, a canvas is a list of bytes).
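A sketch of such an integration test, again in Python with invented names: a fake event handler combines the hit-test data with emitted semantic events, and the test simulates clicks and asserts on the event log, with no rendering involved.

```python
class CanvasEventHandler:
    """Emits both raw coordinate events and inferred semantic events,
    so a test (or a Selenium hook) can assert on what was clicked."""

    def __init__(self, zones):
        self.zones = zones   # {component_name: (x, y, width, height)}
        self.emitted = []    # the event log the test will inspect

    def click(self, px, py):
        self.emitted.append(f"clicked coordinates ({px}, {py})")
        for name, (x, y, w, h) in self.zones.items():
            if x <= px < x + w and y <= py < y + h:
                self.emitted.append(f"clicked {name}")
                return name
        self.emitted.append("clicked background")
        return None

handler = CanvasEventHandler({"button_a": (10, 10, 100, 30)})
handler.click(20, 20)    # lands inside button_a
handler.click(500, 500)  # lands outside every zone

assert "clicked button_a" in handler.emitted
assert "clicked background" in handler.emitted
```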

A problem that we had long ago at a previous job was that Selenium tests were potentially slow and flaky when performing AJAX calls, because the only way to detect those was "wait for element to be present" active waits, so we would wait more time than needed (to avoid waiting too little and getting a false negative just because the element took longer but ended up showing). The frontend framework team solved this by making the JS MVC framework emit events like I've started action xxx and I've finished action xxx, and tweaking Selenium to understand those and respond much quicker to those AJAX loads.
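The idea can be sketched roughly like this (in Python, with a thread simulating the async AJAX call; the real setup was Javascript events plus a tweaked Selenium): instead of sleep-and-retry polling, the test blocks on a "finished" signal and returns the instant the action completes.

```python
import threading

class ActionTracker:
    """Tracks started/finished events so a test can block on completion
    instead of polling with fixed sleeps."""

    def __init__(self):
        self._finished = {}

    def start(self, action):
        self._finished[action] = threading.Event()

    def finish(self, action):
        self._finished[action].set()

    def wait_for(self, action, timeout=5.0):
        """Returns as soon as the action finishes (True), or False on timeout."""
        return self._finished[action].wait(timeout)

tracker = ActionTracker()
tracker.start("load_results")

def fake_ajax():
    # In reality this would run when the AJAX response arrives.
    tracker.finish("load_results")

threading.Thread(target=fake_ajax).start()
assert tracker.wait_for("load_results")  # no wasted waiting time
```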


I am far from an expert in testing, but I firmly believe that, given some time to think and effort to implement, most scenarios can be reduced to simpler ones that can be actually tested.

As a small shameless plug, while reading a book about maze generation algorithms, I decided to port (and extend) the examples to Python, and along the way did some testing of the basic building blocks: generating the maze grid with its cells, calculating the neighbours of a cell, and distances. The result is a set of small yet practical tests, and afterwards I can focus on implementing the book's algorithms: if something fails, I can be certain it is not a bug in a building block but in the algorithm being coded. Given time, I could also now easily test each algorithm.
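As a tiny illustration of one such building-block test (the names here are illustrative, not the actual port's API): the grid and its neighbour calculation can be verified in a few assertions, independently of any maze algorithm built on top.

```python
class Grid:
    """A minimal maze grid: cells addressed by (row, column)."""

    def __init__(self, rows, columns):
        self.rows, self.columns = rows, columns

    def neighbours(self, row, column):
        """Orthogonal neighbours of a cell, clipped to the grid bounds."""
        candidates = [(row - 1, column), (row + 1, column),
                      (row, column - 1), (row, column + 1)]
        return [(r, c) for r, c in candidates
                if 0 <= r < self.rows and 0 <= c < self.columns]

grid = Grid(3, 3)
assert len(grid.neighbours(1, 1)) == 4   # center cell: all four neighbours
assert len(grid.neighbours(0, 0)) == 2   # corner cell: only two
assert (0, 1) in grid.neighbours(0, 0)
```

Once these primitives are pinned down by tests, a failure while coding, say, a binary-tree or sidewinder algorithm points at the algorithm itself.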

The state of PC gaming in 2019

I love playing videogames, and for a long time the PC has been my preferred platform. I've lived through everything from the old days of MS-DOS with 640KB of base memory (until DOS4GW came along) to most Windows versions (up to Windows 7, which I'm sticking with until really forced to upgrade). I've seen all kinds of installers and copy protections come and go up to the arrival of the digital era, where all I need is two hard disk drives full of my old CD/DVD games dumped to ISOs, plus an internet connection to download the ones I have in digital format. I've suffered quite a few graphics driver crashes (I'm mostly looking at you, NVidia) until Windows 7 came along with user-space and system-space drivers, and I no longer get angry whenever a game I want to play asks me to update itself before I can have fun.

Three factors have changed the rules of play and made everything in theory easier and simpler by going digital:

  • Steam got so popular and big that it became the de facto distribution platform for the majority of games, and a really high percentage of indie titles.
  • Piracy, (often terrible) copy protections for games, and broadband connections made it very convenient to ditch physical media and instead do server-side checks.
  • Game sizes are growing to the point that not even a single Blu-ray disc can contain the full game data with all the multi-language assets. Plus, it would be very risky to force everyone to buy a Blu-ray reader when laptops and most desktop rigs no longer have any optical drive at all.

All this initially sounds good and great for us gamers. For many years only Steam was doing it, and physical PC sales slowly decayed; they are now so marginal that I can't conceive of buying physical PC titles except when they're cheaper via Amazon (and even then, I just throw away everything except the CD key). It got to the point where most retail games are now just a Steam installer binary plus the downloaded Steam data: you add the game to your collection, install it from the DVDs, and then it updates to the latest patch.

But lately, other major game publishers have wanted their share of the cake instead of paying Valve, and thus started creating their own clients and gamification layers (with cross-platform accounts, friend lists, achievements and the like)... and as of 2019 I currently have six game launchers installed:

  • Activision Blizzard's Battlenet
  • EA's Origin
  • Epic Games Store
  • GOG Galaxy
  • Steam
  • UbiSoft's UPlay

And there are even more (like the Bethesda launcher or the Windows Store), so this list is just a sample, and it can get even worse.

I now have the fun? sad? problem of sometimes not knowing whether I got a certain game on Steam, GOG or Origin (e.g. Dead Space). I also have to keep a list of user accounts, complete with two-factor auth configurations or "authenticator apps" for each. Just compare that with any console, where you have a single, centralized store with a single list of installed games that you launch with a single button.

I totally understand publishers not wanting to "pay the competitors" (mostly Valve) when the PC is an open platform, and of course it is good for them to control everything and distribute their games digitally, because it also solves the problem of the resale/second-hand market (which probably hurts them way more on consoles than on PC, but anyway, one less problem to care about).

But at least there's some hope, as GOG recently announced they're working on a universal PC launcher, GOG Galaxy 2.0. If this idea works, and especially if they don't attempt any self-promotion over other stores' titles or anything that could anger the other store publishers, it could be our salvation from this chaos. Just scanning your hard drive and keeping a list of game "shortcuts", even if those then launch the corresponding secondary launcher/store and boot the game (as Steam now does with, for example, some Ubisoft titles launching UPlay), would be more than enough for me. There's really no need for unified achievement systems, unified friend lists or "one shop to rule them all".

A small note: GOG Galaxy will only be for PC, so I can forget about it on Linux, but there at least we have some other nice alternatives, like Lutris with its amazing Wine-tuned installers. Gaming on Linux is getting better, but it's still far from being a viable alternative for the masses.

Now, I'm no expert on the field but I've been playing games for long and this is the only thing I really need and want: A unified "installed games library" for PC. 🤞 Fingers crossed we'll get at least that.

Naming bugs for fun

Going for a walk today, I remembered how, at previous jobs, we used to give funny names to certain types of bugs that either happened from time to time or posed unique challenges. I'm probably missing more, but the following small list is all I recall :)

Cinderella bug: A bug (or failing test) that only presents itself around midnight. Usually related to datetime issues.

Spring/Fall bug: A variant of the previous one, caused by daylight saving time. Not handling timezones can also help make it (or other variants) happen.

The poltergeist: You feel it, you sense it but you cannot see its source.

Schrödinger's variables: Issues with compilers or interpreters causing broken data structures to crash and/or report having and not having multiple values at the same time when inspected. I have a precise example of this one from late 2010/early 2011, in which I got a Javascript error under Internet Explorer 9 beta incorrectly reporting success:true for a scenario that was erroring. It claimed the variable was a boolean, yet its value was simultaneously neither true nor false. Here is the screenshot I took for fun:

IE9 Beta Schrödinger's boolean

Gotta catch them all bug: When somebody applies Pokemon exception handling (catching Exception, Error or the corresponding language's parent exception type) and the real bug gets swallowed by the handler, either raising a different error or staying hidden until found.

The It works on my machine!: A bug that surfaces on environments other than local/dev. More often than it should, it applies to tests, breaking when run via CI but not locally. Usually related to different configurations, different packages, or even different operating systems (Mac vs Linux, Ubuntu vs Alpine, ...).

The orphan: Nobody takes responsibility/ownership of this bug, so it remains unfixed.

The destroyer of worlds: A fatal bug that either crashes the whole system or breaks thousands of tests. While writing this post I found the alternate name Hindenbug.

The Hydra: A bug that, upon fixing, causes (or simply uncovers) more bugs to appear.

The Yeti: A bug reported, probably multiple times, but that nobody is able to reproduce or find. Alternate name: Loch Ness monster bug.

The immortal: A bug that, no matter how many times you try to kill it, you're never able to fully get rid of.

Heisenbug: A bug that changes how it behaves when you try to triage or debug it.

The Padrón pepper bug: A joke about a famous Spanish food, of which some peppers are hot and some are not. Applied to bugs: one that is not deterministic; sometimes it happens and sometimes it doesn't.
