Kartones Blog

Be the change you want to see in this world

Avoiding relative imports in Javascript, TypeScript, Webpack and Jest

A quick post to serve as a reference for myself on how to avoid/"remove" relative imports in big Javascript projects, monorepos, etcetera.

If you want some context on why this can be a good idea, check for example the article Importing with Absolute Paths using webpack in JavaScript/TypeScript.

A five-second summary is:

Converting this unreadable and hard-to-maintain-without-an-IDE monster...

import { whatever } from '../../../../../../../shared/a-category/componentA';

...into something actually meaningful when read:

import { whatever } from '@sharedComponents/a-category/componentA';

The first step is to configure Webpack module aliases and Babel aliases via the babel-plugin-module-resolver plugin.
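As a sketch of that first step, the Webpack side looks roughly like this (assuming shared code lives under src/shared; adjust the path to your project layout):

```javascript
// webpack.config.js — a minimal sketch, assuming shared code lives in src/shared
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      '@sharedComponents': path.resolve(__dirname, 'src/shared'),
    },
  },
};
```

With babel-plugin-module-resolver, the equivalent alias entry would mirror it inside the plugin's options: `["module-resolver", { "alias": { "@sharedComponents": "./src/shared" } }]`.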

Then, for TypeScript, we should configure path mappings.
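The TypeScript path mapping (again assuming src/shared, and resolved relative to baseUrl) would look something like:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@sharedComponents/*": ["src/shared/*"]
    }
  }
}
```

Note that paths only affects type checking and editor resolution; the actual rewriting at build time is still done by Webpack/Babel.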

And if you use Jest for tests, it also needs some configuration to allow proper mocking of aliases.
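For Jest, the usual approach is a moduleNameMapper entry whose regex matches the same alias (again assuming src/shared):

```javascript
// jest.config.js — sketch; the pattern must stay in sync with the Webpack/TS aliases
module.exports = {
  moduleNameMapper: {
    '^@sharedComponents/(.*)$': '<rootDir>/src/shared/$1',
  },
};
```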

Don't avoid test randomness, embrace and control it

I was listening to a podcast the other day and heard something that struck me as a kind of mistake: avoiding the root cause of a problem instead of trying to properly solve it. This is my personal point of view, but I'll try to justify it as best I can.

On the aforementioned podcast episode, the folks were talking about good testing practices, and the guest said: "one of the first things I do when I begin writing tests is Faker.seed(1234) so I don't get random errors". Faker is an amazing Python library that creates fake data of many kinds, perfect for your tests; from full names to full addresses and credit card numbers, and supporting different locales, the value it provides is sooo good. As a developer, you get a magical and easy-to-use collection of data factories, and to make it easy to create different users, it is totally random by default: on instantiation, unless you provide it with a seed value, it will choose a random one and generate different values on each test and on each run (even of the same test).

Pinning the seed to a fixed value was defended with the argument of getting deterministic/reproducible test runs (one of the speakers had a bug in a certain address field, and it took him a while to find it as the test only failed sometimes). Of course, when you stick to the same data you get deterministic results, and if you weren't using Faker at all I'd see that as a valid approach. But what Faker provides is an abundance of varied test data. You no longer have just a "John Doe" and a "Jane Doe"; you can try many different names from different locales.

To me, the correct way of using Faker is to generate a random seed (per test, per test file, per build, whatever you prefer) and seed the generator with it, but also print the seed to stdout. This way, you get the best of both worlds:

  • Different runs use different data, so your data has more entropy, and you get more chances of finding hidden bugs that fixed data won't reveal
  • If a test fails due to the random data, you have the seed value in the output/test logs, so you can reproduce it
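The pattern is easy to sketch with plain random (Faker.seed() would take the seed the same way; the helper name here is made up for illustration):

```python
import random

def seeded_values(seed=None, count=3):
    """Pick a random seed unless one is given, log it, and seed the generator."""
    if seed is None:
        seed = random.randrange(2**32)
    # Keep this line in the test output so any failure can be replayed
    print(f"test seed: {seed}")
    rng = random.Random(seed)
    return [rng.randint(1, 100) for _ in range(count)]

fresh = seeded_values()        # different data on every run
replay = seeded_values(1234)   # pass a logged seed to reproduce a failure
assert seeded_values(1234) == replay
```

The same idea scales from a single helper to a per-build seed: whatever the granularity, the only requirement is that the chosen seed ends up in the logs.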

A good example of how this works is another Python library I like to use, pytest-randomly, which randomly reorders the test execution order, so you can detect nasty bugs due to shared state, cache issues, and the like. On each run, it'll output a line like the following:

Using --randomly-seed=3888935657

Sometimes making your test data static even makes the test suite more fragile and harder to evolve, because the more data it contains, the more the tests rely on specific values: e.g. always assuming user_id = 1 instead of always referencing your test user #1 (with whatever id it happens to have). Then you need to create another user with different data, or go change dozens of hardcoded values, fixture JSON files, and so on... in the end you end up with a bunch of hardcoded test users, all of them with hardcoded data that you now also have to explicitly maintain. We like and strive for configuration as code, so why do manual maintenance of test fixtures instead of relying as much as possible on auto-generated ones?
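In code, that means a tiny factory whose ids are generated, so tests assert on the object they received rather than on a literal id (a hypothetical sketch, all names mine):

```python
import itertools
import random

# Start the id sequence at a random point so ids differ across runs
_next_id = itertools.count(random.randrange(1, 1_000_000))

def make_user(**overrides):
    """Hypothetical factory: tests hold on to the returned user, never a hardcoded id."""
    user = {"id": next(_next_id), "name": f"user-{random.randrange(1_000_000)}"}
    user.update(overrides)
    return user

alice = make_user(name="alice")
bob = make_user()
assert alice["id"] != bob["id"]  # assert on relationships, not on literal ids
```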

Ultimately, adding randomness to your tests doesn't add flakiness; what it really does is uncover hidden bugs and help you achieve a bit more antifragility. Instead of freezing your test data, add a bit of logging to ensure you have enough information to deterministically reproduce any failure. Embrace and control randomness.

Course Review: Unconscious Bias (LinkedIn Learning)


Unconscious Bias is a small but interesting ~20-minute course about some biases that often get triggered without our awareness and that we might not even know we have. We'll learn about 5 biases:

  • Affinity bias
  • Halo bias
  • Perception bias
  • Confirmation bias
  • Groupthink

It's a tiny but appealing lesson; it could be longer and go more in depth, but on the other hand it works as a quick learning pill.


To complement, I'll leave here some notes I had about the topic, mostly a list I gathered some time ago (sadly I didn't note down where from):

  • Anchoring Bias: People are over-reliant on the first piece of information they hear.
  • Availability heuristic: People overestimate the importance of information that is available to them.
  • Bandwagon effect: The probability of one person adopting a belief increases based on the number of people who hold that belief. (groupthink)
  • Blind-spot bias: Failing to recognize your own cognitive biases is a bias in itself. People notice cognitive and motivational biases much more in others than in themselves.
  • Choice-supportive bias: When you choose something, you tend to feel positive about it, even if that choice has flaws.
  • Clustering illusion: The tendency to see patterns in random events.
  • Confirmation bias: We tend to listen only to information that confirms our preconceptions.
  • Conservatism bias: Where people favor prior evidence over new evidence or information that has emerged. (we are slow to accept change)
  • Information bias: Tendency to seek information when it does not affect action. (more information is not always better)
  • Ostrich effect: Decision to ignore dangerous or negative information.
  • Outcome bias: Judging a decision based on the outcome, not on how the decision was made in the moment.
  • Overconfidence: Too confident about our abilities, which leads to taking greater risks.
  • Placebo effect: When simply believing that something will have a certain effect on you causes it to have that effect.
  • Pro-innovation bias: When a proponent of an innovation tends to overvalue its usefulness and undervalue its limitations.
  • Recency: Tendency to weight the latest information more heavily than older data.
  • Salience: Tendency to focus on the most easily recognizable features.
  • Selective perception: Allowing expectations to influence how we perceive the world.
  • Stereotyping: Expecting a group or person to have certain qualities without having real information about the person. (we tend to overuse and abuse it)
  • Survivorship bias: Focusing only on surviving examples, causing us to misjudge a situation.
  • Zero-risk bias: We love certainty, even if it's counterproductive.

And also I'll leave a link to Wikipedia's list of cognitive biases, which includes a lot more.

On Learning React

Among other technologies, I'm getting up to speed with the latest and greatest of React. Although nowadays most official webpages have incredible tutorials (e.g. Typescript, React or Jest), I both consume a decent amount of audio content and like to have multiple sources of information, so I asked some friends, did some searches and checked some recommendations. As happens a lot lately, sometimes it's hard to distinguish commercial interest or pure feature bloating from real requirements for building Javascript apps, and it happens even more so with React. I'm sometimes overwhelmed by so many theoretical frameworks and libraries and tools and "extensions" and whatnot, all of which, according to some sources, you should learn and use. But I need to allocate time for other stuff, so it's not always easy to identify and select what you feel will provide the most value for the time invested (and sometimes money, but that's irrelevant if the content is good).

That said, my tiny list of 3rd party resources that I currently read/listen to/watch and would recommend related to React (and I could also say "Javascript") is:

  • Wes Bos: Not only do I think his courses are really well done, but some of them are even free! His blog is not updated often, though.
  • Tania Rascia: I found Tania's excellent React-related posts while searching for good tutorials apart from the official ReactJS.org one, and I decided to subscribe via RSS.
  • Syntax: From Wes Bos and Scott Tolinski, on an always fun tone but touching all kinds of Javascript topics, some beginner level (which I'm glad for right now), others more advanced.
  • Javascript Jabber: A friend's suggestion and a recent addition. I like that the panel of podcasters is varied (plus one or more guests), so there are usually different points of view, and the topics are in general also varied and interesting.
  • JS Party: I'm already a long-time listener of their sister podcast The Changelog, so I'm glad to see the same quality of episodes and variety of topics and interviewees.

I follow a few more blogs and podcasts, but haven't yet made up my mind about whether they're great enough to keep following or just good. I'd appreciate suggestions of any other resources to learn React and modern Javascript that you think are good, so if you have any, please write to me!

Book Review: Stairway to Badass: The Making and Remaking of Doom


Stairway to Badass: The Making and Remaking of Doom book cover

Title: Stairway to Badass: The Making and Remaking of Doom

Author(s): David L. Craddock

In theory, half of this book is about Doom, and half about Doom 2016. In reality, you first have to remove the trailing 20% that is just dull content to fill pages (a preview from the Rocket Jump Quake book and a "making of" of this book?!). Then, Doom 2016 gets around 1/3 of the pages, and half of those are "not bad but not incredibly interesting" interviews, including one with a speedrunner. What remains covers classic Doom, but so much of it relates to modding, map making and speedrunning that, if you really want to know anything relevant about how the game was built, you're better off with alternatives like Masters of Doom or the Black Book about Doom.

I have really enjoyed reading other books from this author, but this title feels mediocre. An interview or two with people creating content for the original game so many years later is interesting, but when it makes up not only most of the content but most of the actually interesting or fresh content, there's something amiss. Another example is the heavy presence of John Romero's Sigil expansion for classic Doom: sure, it's cool (I played it, and the maps are great and well done), but one chapter explaining what it is, another in the form of an interview with Romero, another with the playtester, and another with the company building the physical version?

There are some tiny highlights here and there, but throwing a complex technical sentence in the middle of pages of undesired content doesn't make a book deep or interesting, and overall it's simply not worth reading through. If this were a book about Doom's community and modding, it'd be fine, but the title is totally misleading.

I really didn't know what to expect before starting this book, but disappointment wasn't on the list.
