Kartones Blog

Be the change you wanna see in this world

Course Review: Perfect English Pronunciation: British English (Udemy)

I was searching for a course to improve my pronunciation when I found that the Perfect English Pronunciation: British English course had been added to the Udemy for Business catalog, so I took it and recently completed it.

It is exactly what I wanted: around four hours of examples, built around minimal pairs or trios of similar-sounding words that you should learn to pronounce differently and correctly. For once, I'll show a screenshot of one pair so you can judge for yourself:

Perfect English Pronunciation: British English course sample screenshot

The pairs/trios are also grouped into two big categories, vowel sounds and consonant sounds. You get plenty of examples, precise instructions on how to vocalize the sounds (including mouth close-ups), and a few repetitions of each.

Totally recommended.


When to Feature Flag new code

A few days ago a colleague asked me when I consider that new code/logic should go behind a feature flag. I gave a short answer, but the truth is that using feature flags (or feature toggles) is very common these days and it is taken for granted that you know how to use them, yet it is not always explained when to use them.

What follows is my humble opinion and experience, feel free to ignore it.

The single most important fact, usually forgotten and perverted, is that feature flags are meant to be temporary: they should always be removed once fully rolled out (or replaced by a system switch, more on this later).

When to use Feature Flags

  • New features. It is a fact that complex online systems will always have bugs, so having a switch to quickly "roll back" your new feature is a must. Also, most non-trivial feature flag systems allow for gradual roll-outs, canary releases, role-based, entity-id-based and other conditions (see the sketch after this list).
  • Experimental features. To me, the same scenario as the previous point.
  • Breaking changes. Many times these should actually be handled as something like a parallel change.
  • Code that changes storage data (format or manipulation). Data corruption can be so dangerous that I'd consider it almost like a breaking change scenario.
  • Code that modifies input/output. Except if new I/O parameters/fields/things are optional.
  • Code that raises new, potentially unhandled exceptions. Extending your validation logic with new scenarios? Cover the new validation with a FF until all upper layers/callers have been revised and handle the new errors.
  • Complex refactors or rewrites. If you have really high code coverage you might not need it, but we all know what usually happens to code coverage as codebases grow.
  • Untested or poorly tested code. Your change might look tiny, but tiny changes in complex systems can wreak havoc in unfortunate situations.
  • Code changes with potential performance hits. From adding a new data manipulation library to modifying an apparently innocuous ORM query: monitoring will alert you of performance degradations, but if you can easily switch back to the old version while you ensure everything is fine, all the better.
  • Simple A/B testing. As in "one control group and one experiment group".
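As a hypothetical illustration of the gradual roll-out and entity-id-based conditions mentioned above, here is a minimal sketch of a percentage-based flag check; the flag configuration, names and percentages are made up, and any real feature flag system will offer something equivalent out of the box:

import hashlib

# Made-up roll-out configuration: percentage of entities that get the new code path
ROLLOUT_PERCENTAGES = {
    "NEW_SHINY_FEATURE": 25,
}

def enabled(flag_name: str, entity_id: str) -> bool:
    percentage = ROLLOUT_PERCENTAGES.get(flag_name, 0)
    # Hash the entity id so the same entity always gets the same decision
    bucket = int(hashlib.sha256(f"{flag_name}:{entity_id}".encode()).hexdigest(), 16) % 100
    return bucket < percentage

# Usage: route a given user through the new or the old code path
if enabled("NEW_SHINY_FEATURE", entity_id="user-42"):
    pass  # new logic
else:
    pass  # old logic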

When NOT to use Feature Flags

  • To disable system components. While the mechanics are the same, those uses are for System Switches, Ops Toggles or whatever you want to call them; do not mix them with feature flags, because it sends the wrong message that a feature flag can be long term, when it shouldn't be.
  • For permanent versioned features. Example: if you have versioned API endpoints, the feature flag should disappear as soon as the new version is stable, replaced by routing-based versioning (or user-based or any other solution except a feature flag).
  • Complex A/B testing. There are better solutions than feature flag systems for A/B testing so use one instead.

Miscellaneous closing tips

Feature Flags are to be used by developers and product, while System Switches are for Systems, DevOps, SREs and the like.

When coding the feature flag check and the forking logic, there are two accepted paths. The first one allows you to just delete the lines in the future without any further changes:

if not feature_flags.enabled(feature_flags.constants.NEW_SHINY_FEATURE):
    # old path (don't forget to return, to avoid also executing the new logic!)
    return

# new logic

And the more commonly found one, which just needs code re-indentation when removing the flag:

if feature_flags.enabled(feature_flags.constants.NEW_SHINY_FEATURE):
    # new logic
else:
    # old logic

If you want to know more about what Feature Flags are, this article is quite detailed.


Micro-Frontends (and self-contained systems)

The topic of the past weeks has been Micro-Frontends, due to this ThoughtWorks article about them (if you haven't read it, stop, go read it, then come back and continue). It has caused some discussions, mostly because there now seems to be a trend towards going mono-repo with Javascript, uniting everything to build an easier path to updating dependencies for all your FE pieces.

I know very little about current frontend development, tooling and best practices, so I'm not going to judge how good or bad a mono-repo approach is in itself. I just question how fast it can be, both from experience and personal intuition: yes, you update React once and every single Javascript piece of your system is updated, but at the same time we all know that updates are often harder and full of unforeseen problems, so it could become a bottleneck as it is a somewhat all-or-nothing action.

Anyway, what I really wanted was to add some related links on the topic to that really interesting article, as in the past at TheMotion our idea was to move towards Self-Contained Systems, and at Ticketea the new checkout frontend was a "by the book" example of a micro-frontend (including iframe-based injection on the main website).

If you're interested, check the following resources:


Book Review: How to win friends and influence people

Review

How to win friends and influence people

Title: How to win friends and influence people

Author: Dale Carnegie

This is a book I've heard about many times and that more than one person has recommended to me in the past, so for a change from dev podcasts and similar commute listening, I got the audio-book and just finished listening to it.

It is a curious, mixed-feelings book. On one side it has a lot of good advice on varied topics, from how to deal with friendships and love, to business, to delivering hard news, to encouraging good behaviour from people who, for example, might be performing lower than they should... Being a revised edition, I don't know if the original one had the concept of "organizational resilience", but at least some examples include the current impact of social media, Twitter, Facebook and the like, so while the original dates from 1936 you won't notice it.

But on the other side, and in my personal opinion only, the book goes so far into being good with others and assuming good will and good intentions that my personal experience rejected some points as being "way too positive". Not everything can be solved with a smile (or even deserves one), and while smiling and being considerate with others will surely help, for example in negotiations, if you don't really feel it, it can backfire into making you appear fake and deceptive (speaking again from experience: in the cases I've seen, the effect was worse).

Sure, it has lots of examples of presidents and miscellaneous (often famous) people doing amazing human acts, but I feel the book has a huge aura of "just be good with everyone" instead of covering more specific, sub-optimal scenarios and how to properly deal with them. I wasn't expecting something as full of examples and as up to date as, for example, Managing Humans, but neither something so mystic and overly optimistic.

Still, it is interesting to read and contains some good advice. Just not as amazing as I expected.

Notes

I wish I could have taken more notes, but with an audio-book it was hard to note anything beyond very concise points. Anyway, there are nice summaries online, like the one at Wikipedia.

  • generosity, trustworthiness
  • who you are and what you do makes others follow you
  • meaning before the medium
  • appreciation, honesty, sincere interest, empathy
  • increased touch points while losing touch
  • take the time to craft meaningful responses
  • try to avoid telling people they are wrong (including tones & gestures)
  • try to model/adapt those who are most influencing first to your ideas, then they'll spread it
  • ask questions instead of imposing orders
  • embrace failure/mistakes -> organizational resilience. 5 steps:
    1. acknowledge that failures happen
    2. encourage dialog to foster trust
    3. separate person from failure. project failed, not you failed
    4. learn from your mistakes
    5. create risk taking and fearless environment
  • magnify improvement
  • praise for good results, encouragement anytime (even when things go poorly)
  • make failures easy to correct

Hard to test scenarios

I will start this blog post with two examples of non-code "hard to test" scenarios (about sight) and their solutions:

  • How do blind people read when they cannot see? They touch the braille letters
  • How do you find an invisible man? You can spray paint in the room to cover him, or use an infrared camera to spot him, or flood the floor with water so that when he walks you see where he is

Instead of thinking "oh, I cannot see xxxxx, so I can't do it", the first scenario uses a different approach (a different sense), while the second one tries to transform the situation into one that fits our need (making the invisible man visible so we can use our sight to find him).

Now, applying this to software: in many cases hard-to-test things are due to code complexity, and I don't want to dwell on that as there are many articles and techniques on how to solve it, but sometimes there are scenarios that really seem hard to test, or too fragile if tested traditionally.

I'm going to use an example that I read somewhere not long ago: imagine you want to test some UI that gets rendered into an HTML5 canvas component. Assume we have full control over the source code, meaning that we build the rendering and the associated Javascript that handles interactions with the canvas.

This scenario poses a bit of a challenge because, once rendered, the canvas is just a container of pixels (or a byte array), so if we have buttons and other interactive items, how can we test them?

Well, three types of tests come to my mind that one can do:

I) Unit tests

First of all, if you control generating the elements, you can build a lot of tests around element generation and composition of those elements, without actually needing to care about the rendering/UI.

As I prefer not to repeat things already explained way better, there is an Uncle Bob blog post about applying TDD to terrain generation which exemplifies how to test something that initially seems as hard as a terrain generation algorithm; the "trick" used is to focus not on the data generated but on the generators and actions. Terrain generation is deterministic because, oversimplifying, the only things that change are the input parameters and the seed, so if you test those well enough, you can replicate any terrain inside a test (although the test could get big!).
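To make the determinism point concrete, here is a toy sketch (not Uncle Bob's code, just a made-up heightmap generator) showing that the same seed and parameters always replicate the same output, which is what makes it testable:

import random

def generate_terrain(seed, size):
    # Toy deterministic heightmap: same seed and size always give the same grid
    rng = random.Random(seed)
    return [[rng.randint(0, 255) for _ in range(size)] for _ in range(size)]

def test_same_seed_replicates_terrain():
    assert generate_terrain(seed=42, size=8) == generate_terrain(seed=42, size=8)

def test_different_seed_changes_terrain():
    assert generate_terrain(seed=42, size=8) != generate_terrain(seed=7, size=8)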

As a real-world example, at a previous job I had to build a server-side avatar generation platform and wanted to add tests to it. By making the generation code a set of deterministic actions, it was really easy to test that, given actions A, B and C, the result was a "female avatar wearing a cape with a Chinese temple background". You can see a small example of how the PHP code looks at this slide.
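A minimal sketch of that action-based idea (in Python with made-up action names; the real platform was PHP, but the principle is language-agnostic): the generator composes a scene description from deterministic actions, and the tests assert on that description, never on rendered pixels.

def compose_avatar(actions):
    # Build a scene description by applying deterministic actions in order
    scene = {"base": None, "clothing": [], "background": None}
    for kind, value in actions:
        if kind == "set_base":
            scene["base"] = value
        elif kind == "add_clothing":
            scene["clothing"].append(value)
        elif kind == "set_background":
            scene["background"] = value
    return scene

def test_female_avatar_with_cape_and_temple_background():
    actions = [
        ("set_base", "female"),
        ("add_clothing", "cape"),
        ("set_background", "chinese_temple"),
    ]
    assert compose_avatar(actions) == {
        "base": "female",
        "clothing": ["cape"],
        "background": "chinese_temple",
    }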

II) Acceptance tests - method A

If you want to test by replicating the scenario of a user browsing to your page with the canvas-based UI and interacting with it, using for example Selenium, initially you're in trouble, as it is designed to work with standard HTML & CSS markup. It can navigate through nodes and/or CSS rules. You can also make it perform an action of moving to a certain coordinate and clicking (and we can easily adjust it to be inside the canvas, as we at least know its placement and size), so... how do we test it?

Just think about why Selenium works: simply because the browser lets it know about the DOM elements and their CSS properties.

Well, we could then emit additional data useful for testing, because we control the whole process of generating the UI component. We could make each component report the coordinates where a user can click, so a test could say: go to position (x, y) and click, then assert the webpage has navigated to the next screen.

This is nothing new: games have been doing this for game menus since long ago (buttons are just rectangles that detect clicks on them), and, moving forward, even Google Chrome does this with what they call hit test zones:

When the compositor thread sends an input event to the main thread, the first thing to run is a hit test to find the event target. Hit test uses paint records data that was generated in the rendering process to find out what is underneath the point coordinates in which the event occurred.
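A minimal sketch of how such a test could look with Selenium's Python bindings, assuming the page exposes its clickable regions through a made-up window.__hitZones global that the canvas-rendering code would populate (the URL, selector and zone names are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome()
driver.get("https://example.com/canvas-ui")  # placeholder URL

canvas = driver.find_element(By.CSS_SELECTOR, "canvas#app")
# The rendering code would have registered each clickable region here (made up)
zone = driver.execute_script("return window.__hitZones['next_button'];")

# Click roughly in the middle of the reported region; note that the offset origin of
# move_to_element_with_offset differs between Selenium 3 (top-left corner) and
# Selenium 4 (element center), so a real test would adjust accordingly.
ActionChains(driver).move_to_element_with_offset(
    canvas,
    zone["x"] + zone["width"] // 2,
    zone["y"] + zone["height"] // 2,
).click().perform()

assert "/next-screen" in driver.current_url
driver.quit()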

III) Acceptance tests - method B

(This method extends method A)

Another way to do tests, assuming you control the source code of the Javascript event-handling code, is combining the previous method of emitting coordinate data with making the event-handler code also emit events, like "I've clicked coordinates (x, y)". Combined with method A, this leads to being able to infer "I've clicked button A" or "I've clicked the background of the canvas" and gives fine-grained control over events. This is, for example, how Windows has handled messages and signalled events over the controls of programs, windows, etc. for ages.

This would allow for integration tests where you don't need to actually render anything: you can generate the elements, feed them to a mock canvas, link it to the event handler and test its outcomes by simulating clicks at different coordinates (after all, a canvas is a list of bytes).
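As a rough illustration (all names made up, and in Python rather than the Javascript a real canvas handler would use), the event handler can be reduced to a pure mapping from coordinates to named components, which is trivially testable without any rendering:

class CanvasEventHandler:
    def __init__(self, hit_zones):
        # hit_zones: {"button_a": (x, y, width, height), ...}, emitted at generation time
        self.hit_zones = hit_zones
        self.emitted_events = []

    def click(self, x, y):
        # Map a click to the component whose zone contains the coordinates
        for name, (zx, zy, width, height) in self.hit_zones.items():
            if zx <= x < zx + width and zy <= y < zy + height:
                self.emitted_events.append(("clicked", name, x, y))
                return name
        self.emitted_events.append(("clicked", "background", x, y))
        return "background"

def test_click_inside_and_outside_button():
    handler = CanvasEventHandler({"button_a": (10, 10, 100, 30)})
    assert handler.click(50, 20) == "button_a"
    assert handler.click(300, 300) == "background"
    assert handler.emitted_events[0] == ("clicked", "button_a", 50, 20)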

A problem we had long ago at a previous job was that Selenium tests were potentially slow and flaky when performing AJAX calls, because the only way to detect those was "wait for element to be present" active waits, so we would wait more time than needed (to avoid waiting too little and getting a false negative just because the element took longer but ended up showing). The frontend framework team solved this by making the JS MVC framework emit events like "I've started action xxx" and "I've finished action xxx", and tweaking Selenium to understand those and respond much more quickly to the AJAX loads.
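A rough sketch of that idea with Selenium's Python bindings, assuming a made-up window.__pendingActions counter that the framework would increment when an action starts and decrement when it finishes (this is not the actual solution we used, just an illustration):

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # placeholder URL

def no_pending_actions(drv):
    # Hypothetical counter maintained by the JS framework for in-flight AJAX actions
    return drv.execute_script("return (window.__pendingActions || 0) === 0;")

# Returns as soon as the framework reports it is idle, instead of a fixed sleep
# or repeated "wait for element" polling
WebDriverWait(driver, timeout=10).until(no_pending_actions)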

Conclusion

I am far from an expert in testing, but I firmly believe that, given some time to think and effort to implement, most scenarios can be reduced to simpler ones that can be actually tested.

As a small shameless plug: while reading a book about maze generation algorithms, I decided to port (and extend) the examples to Python, and along the way did some testing of the basic building blocks: generating the maze grid with its cells, calculating the neighbours of a cell, and distances. The results are small yet practical tests, and afterwards I can focus on implementing the book's algorithms; if something fails, I can be certain it is not a bug in the building blocks but in the algorithm being coded. Given time, I could now also easily test each algorithm.
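To give a flavour of what such building-block tests look like, here is a simplified sketch (not the actual code from that port) of testing neighbour calculation on a grid:

def neighbours(row, col, rows, cols):
    # Return the in-bounds orthogonal neighbours of a cell
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

def test_corner_cell_has_two_neighbours():
    assert sorted(neighbours(0, 0, rows=3, cols=3)) == [(0, 1), (1, 0)]

def test_center_cell_has_four_neighbours():
    assert len(neighbours(1, 1, rows=3, cols=3)) == 4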

