I was searching for a course about improving pronunciation, when I found that the Perfect English Pronunciation: British English course has been added to the Udemy for Business catalog, so I took it and recently completed it.
It is exactly what I wanted: around four hours of examples, with minimal pairs or trios of similar-sounding words that you should learn to pronounce distinctly and correctly. For once, I'll show a screenshot of one vowel pair so you can judge for yourself:
The pairs/trios are also grouped into two big categories, vowel sounds and consonant sounds. You get plenty of examples, precise instructions on how to vocalize the sounds (including mouth close-ups), and a few repetitions of each.
A few days ago a colleague asked me when I consider that new code/logic should go behind a feature flag. I gave a short answer, but the truth is that while using feature flags (or feature toggles) is very common these days, and it is taken for granted that you know how to use them, it is not always explained when to use them.
What follows is my humble opinion and experience, feel free to ignore it.
The single most important fact, usually forgotten and perverted, is that feature flags are meant to be temporary: they should always be removed once fully rolled out (or replaced by a system switch, more on this later).
Feature Flags are to be used by developers and product, while System Switches are for Systems, DevOps, SREs and the like.
When coding the feature flag check and forking logic, there are two accepted patterns. The first one allows you to simply delete the lines in the future without any further changes:
```python
if not feature_flags.enabled(feature_flags.constants.NEW_SHINY_FEATURE):
    # old path (don't forget to return, to avoid also executing the new logic!)

# new logic
```
And the more commonly found one, which just needs re-indentation when the flag is removed:
```python
if feature_flags.enabled(feature_flags.constants.NEW_SHINY_FEATURE):
    # new logic
else:
    # old logic
```
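To make the snippets above self-contained, here is a minimal sketch of what a `feature_flags` helper behind them might look like. This is purely illustrative: the module, the in-memory registry and the `process_order` function are all hypothetical, and a real system would back the flag lookup with configuration or a remote service.

```python
# Hypothetical, minimal in-memory feature-flag helper.
# Real systems back this with config files or a remote flag service.

class constants:
    NEW_SHINY_FEATURE = "new_shiny_feature"

_flags = {constants.NEW_SHINY_FEATURE: True}

def enabled(flag_name):
    # Unknown flags default to disabled, so stale checks fail safe.
    return _flags.get(flag_name, False)

def process_order(order):
    # The forking pattern from above: one check, two paths.
    if enabled(constants.NEW_SHINY_FEATURE):
        return f"new:{order}"  # new logic
    return f"old:{order}"      # old logic
```

Once the flag is fully rolled out, removing it is just deleting the `enabled` check and the old branch, which is exactly why keeping the fork small and localized matters.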
If you want to know more about what Feature Flags are, this article is quite detailed.
Anyway, what I really wanted was to add some related links to that really interesting article, as in the past at TheMotion our idea was to move towards Self-Contained Systems, and at Ticketea the new checkout frontend was a "by the book" example of a micro-frontend (including iframe-based injection on the main website).
If you're interested, check the following resources:
Author: Dale Carnegie
This is a book I've heard about many times and that more than one person has recommended to me in the past, so, as a change from dev podcasts and the like for commuting, I got the audio-book and just finished listening to it.
It is a curious, mixed-feelings book. On one side, it has a lot of good advice on varied topics: how to deal with friendships and love, business, delivering hard news, encouraging good behaviour from people who, for example, might be performing lower than they should... Being a revised edition, I don't know if the original one had the concept of "organizational resilience", but at least some examples include the current impact of social media, Twitter, Facebook and the like, so while the original dates from 1936, you won't notice it.
But on the other side, and this is my personal opinion only, the book goes so far into being good with others and assuming good will and good intentions that my personal experience rejected some points as being "way too positive". Not everything can be solved with a smile (or even deserves one), and while smiling and being considerate with others will surely help, for example in negotiations, if you don't really feel it, it can backfire and make you appear fake and deceptive (speaking again from experience: in the cases I've seen, the effect was worse).
It certainly has lots of examples of presidents and miscellaneous (often famous) people doing amazing human acts, but I feel the book has a huge aura of "just be good with everyone" instead of covering more specific, sub-optimal scenarios and how to properly deal with them. I wasn't expecting something as full of examples and as up to date as, for example, Managing Humans, but neither something so mystical and overly optimistic.
Still, it is interesting to read and contains some good advice. Just not as amazing as I expected.
I wish I could have taken more notes, but it being an audio-book, it was hard to note anything but very concise points. Anyway, there are nice summaries online, like the one at Wikipedia.
I will start this blog post with two examples of non-code "hard to test" scenarios (about sight) and their solutions:
Instead of thinking "oh, I cannot see xxxxx, so I can't do it", the first scenario uses a different approach (a different sense), while the second one tries to transform the situation into one that fits our need (making the invisible man visible so we can use our sight to find him).
Now, applying this to software: in many cases things are hard to test due to code complexity, and I don't want to delve into that, as there are many articles and techniques on how to solve it. But sometimes there are scenarios that really do seem hard to test, or too fragile if tested traditionally.
This scenario poses a bit of a challenge because, once rendered, the canvas is just a container of pixels (or a byte array), so if we have buttons and other interactive items, how can we test them?
Well, there are three types of tests that come to mind:
First of all, if you control the element generation, you can build a lot of tests around generating and composing those elements, without actually needing to care about the rendering/UI.
As I prefer not to repeat things already explained way better, there is an Uncle Bob blog post about applying TDD to terrain generation which exemplifies how to test something initially as hard as a terrain generation algorithm, and the "trick" used is to focus not on the generated data but on the generators and actions. Terrain generation is deterministic because, oversimplifying, the only things that change are the input parameters and the seed, so if you test those well enough, you can replicate any terrain inside a test (although the test could get big!).
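The determinism argument can be shown in a tiny sketch. This is not the terrain algorithm from the article, just a toy Python generator whose output depends only on its parameters and a seed, which is what makes it replicable inside a test:

```python
import random

def generate_terrain(width, height, seed):
    """Toy terrain generator: cell heights depend only on size and seed."""
    rng = random.Random(seed)  # instance-local RNG so global state can't leak in
    return [[rng.randint(0, 255) for _ in range(width)] for _ in range(height)]

# Same inputs -> exactly the same terrain, every time.
a = generate_terrain(4, 3, seed=42)
b = generate_terrain(4, 3, seed=42)
```

Because the generator takes its randomness from a seeded, local `random.Random`, a test can freeze any terrain just by pinning the parameters and seed.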
As a real-world example, at a previous job I had to build a server-side avatar generation platform and wanted to add tests to it. By making the generation code a set of deterministic actions, it was really easy to test that, given actions A, B and C, the result was a "female avatar wearing a cape with a Chinese temple background". You can see a small example of how the PHP code looks at this slide.
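The actual platform was PHP, but the shape of the idea can be sketched in Python. The action names here are illustrative, not the real ones: the point is that the test asserts on the composed description of the avatar, never on rendered pixels.

```python
def apply_actions(actions):
    """Compose an avatar description from a deterministic list of actions."""
    avatar = {}
    for action in actions:
        avatar.update(action)  # each action contributes one trait
    return avatar

# Actions A, B and C from the example (hypothetical names).
actions = [
    {"body": "female"},                # action A
    {"clothing": "cape"},              # action B
    {"background": "chinese_temple"},  # action C
]
```

Since the composition is deterministic, asserting "actions A, B, C produce this avatar" is an ordinary unit test, no image comparison required.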
If you want to test by replicating the scenario of a user browsing to your page with the canvas-based UI and interacting with it, using for example Selenium, initially you're in trouble: it is designed to work with standard HTML & CSS markup, navigating through nodes and/or CSS rules. You can also make it move to a certain coordinate and click (and we can easily keep that inside the canvas, as we at least know its placement and size), so... how do we test it?
Just think about why Selenium works: simply because the browser lets it know about the DOM elements and their CSS properties.
Well, we could then emit additional data useful for testing, because we control the whole process of generating the UI component. We could make each component report the coordinates where a user can click, so that we could write a test saying "go to position (x, y) and click, then assert the webpage has navigated to the next screen".
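Here is a sketch of that idea in Python. The component and metadata-export names are hypothetical; the point is that each canvas-drawn component also knows its clickable rectangle, and that data can be exported for a Selenium-style test to consume:

```python
class CanvasButton:
    """A canvas-drawn button that also reports its clickable rectangle."""
    def __init__(self, name, x, y, width, height):
        self.name = name
        self.rect = (x, y, x + width, y + height)

    def contains(self, px, py):
        x1, y1, x2, y2 = self.rect
        return x1 <= px < x2 and y1 <= py < y2

def export_test_metadata(components):
    """Data a browser test could read to know where to click on the canvas."""
    return {c.name: c.rect for c in components}

buttons = [
    CanvasButton("next", 10, 10, 80, 30),
    CanvasButton("back", 10, 50, 80, 30),
]
```

A real setup might expose this metadata via a JS global or a data attribute so the test harness can translate "click the next button" into canvas coordinates.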
This is nothing new: games have been doing it for their menus for a long time (buttons are just rectangles that detect clicks on them), and, moving forward, even Google Chrome does it with what they call hit test zones:
When the compositor thread sends an input event to the main thread, the first thing to run is a hit test to find the event target. Hit test uses paint records data that was generated in the rendering process to find out what is underneath the point coordinates in which the event occurred.
(This method extends method A)
We can capture "I've clicked coordinates (x, y)" events. Combined with method A, this allows us to infer "I've clicked button A" or "I've clicked the background of the canvas", and to have fine-grained control over events. This is how, for example, Windows has handled messages and signalled events over the controls of programs, windows, etc. for ages.
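The hit-test idea fits in a few lines. This is a hypothetical miniature, not Chrome's implementation: each control registers a rectangle, and a dispatcher routes a click to whichever control is under it, checking the topmost (last painted) zone first:

```python
class HitTester:
    """Routes click coordinates to the control whose zone contains them."""
    def __init__(self):
        self.zones = []  # (x1, y1, x2, y2, handler) in paint order

    def register(self, x1, y1, x2, y2, handler):
        self.zones.append((x1, y1, x2, y2, handler))

    def click(self, x, y):
        # Walk zones from topmost (last painted) down to the bottom.
        for x1, y1, x2, y2, handler in reversed(self.zones):
            if x1 <= x < x2 and y1 <= y < y2:
                return handler()
        return "background"  # nothing hit: the bare canvas

ht = HitTester()
ht.register(0, 0, 100, 100, lambda: "panel")
ht.register(20, 20, 60, 40, lambda: "button")  # painted on top of the panel
```

Checking zones in reverse paint order is what makes a button drawn on top of a panel win the click, the same property a real hit test needs.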
This would allow for integration tests where you don't need to actually render anything: you can generate the elements, feed them to a mock canvas, link it to the event handler, and test its outcomes by simulating clicks at different coordinates (after all, a canvas is just an array of bytes).
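Putting the pieces together, an integration-style test could look like the following sketch (all names hypothetical): a mock canvas app that never renders a pixel, where simulated clicks drive navigation state that the test asserts on.

```python
class MockCanvasApp:
    """Event handling without rendering: clicks drive navigation state."""
    def __init__(self, elements):
        # elements: name -> (clickable rect (x1, y1, x2, y2), target screen)
        self.elements = elements
        self.screen = "home"

    def simulate_click(self, x, y):
        for (x1, y1, x2, y2), target in self.elements.values():
            if x1 <= x < x2 and y1 <= y < y2:
                self.screen = target
                return
        # Clicks on the bare canvas don't navigate anywhere.

app = MockCanvasApp({"next": ((10, 10, 90, 40), "checkout")})
```

No pixels are ever drawn, yet the test exercises exactly the logic a real user click would trigger.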
A problem we had long ago at a previous job was that Selenium tests were slow and flaky when the page performed AJAX calls: the only way to detect those was "wait for element to be present" active waits, so we would either wait longer than needed, or wait too little and get a false negative just because the call took longer but eventually succeeded. The frontend framework team solved this by making the JS MVC framework emit events like
"I've started action xxx" and "I've finished action xxx", and by tweaking Selenium to understand those events and respond much more quickly to AJAX loads.
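The framework in question was JS, but the core of the idea can be sketched in Python: track in-flight actions and expose an `idle()` check that a test harness can poll, instead of sleeping for a fixed time. The class and method names are my own illustration, not the actual framework's API.

```python
class ActionTracker:
    """Counts in-flight async actions so a test waits only while busy."""
    def __init__(self):
        self.pending = set()

    def started(self, action_id):
        self.pending.add(action_id)

    def finished(self, action_id):
        self.pending.discard(action_id)

    def idle(self):
        # A Selenium test could poll this state (e.g. via execute_script
        # against a JS equivalent) instead of fixed "wait for element" sleeps.
        return not self.pending

tracker = ActionTracker()
tracker.started("load_cart")
```

Polling a cheap boolean like this lets the test resume the instant the last AJAX call finishes, removing both the over-waiting and the false negatives.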
I am far from an expert in testing, but I firmly believe that, given some time to think and some effort to implement, most scenarios can be reduced to simpler ones that can actually be tested.
As a small shameless plug: while reading a book about maze generation algorithms, I decided to port (and extend) the examples to Python, and along the way did some testing of the basic building blocks: generating the maze grid with its cells, calculating the neighbours of a cell, and distances. The result is a set of small yet practical tests, and afterwards I can focus on implementing the book's algorithms; if something fails, I can be certain the bug is not in a building block but in the algorithm being coded. Given time, I could now also easily test each algorithm.
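To give a flavour of what such building-block tests look like, here is an illustrative sketch (not the actual code from that port): a grid of cells and a neighbour calculation whose results can be asserted directly, before any maze algorithm touches them.

```python
def make_grid(rows, cols):
    """A maze grid represented as a set of (row, col) cells."""
    return {(r, c) for r in range(rows) for c in range(cols)}

def neighbours(cell, grid):
    """Orthogonal neighbours of a cell that actually lie inside the grid."""
    r, c = cell
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [n for n in candidates if n in grid]

grid = make_grid(3, 3)
```

With the grid and neighbour logic pinned down by tests like these, any later failure in a carving algorithm points at the algorithm, not the foundations.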