A short post to serve as a quick reference for myself on how to avoid/"remove" relative imports in big JavaScript projects, monorepos, etcetera.
If you want some context on why this can be a good idea, check, for example, the article Importing with Absolute Paths using webpack in JavaScript/TypeScript.
A five-second summary is:
Converting this unreadable and hard-to-maintain-without-an-IDE monster...
import { whatever } from '../../../../../../../shared/a-category/componentA';
...into something actually meaningful when read:
import { whatever } from '@sharedComponents/a-category/componentA';
The first step is to configure Webpack module aliases and Babel aliases via the babel-plugin-module-resolver plugin.
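As a reference, here is a minimal sketch of both pieces; the @sharedComponents alias comes from the example above, and the src/shared folder is an assumption, so adjust names and paths to your own layout:

```js
// webpack.config.js (relevant part) — declare the alias for webpack's resolver.
// '@sharedComponents' and 'src/shared' are just the names from the example above.
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      '@sharedComponents': path.resolve(__dirname, 'src/shared'),
    },
  },
};
```

```js
// babel.config.js — the same alias declared via babel-plugin-module-resolver,
// so code transpiled outside webpack (scripts, tooling) resolves it too.
module.exports = {
  plugins: [
    [
      'module-resolver',
      {
        root: ['./src'],
        alias: {
          '@sharedComponents': './src/shared',
        },
      },
    ],
  ],
};
```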
Then, for TypeScript, we should configure path mappings.
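A minimal tsconfig.json sketch, again assuming the shared code lives under src/shared:

```jsonc
// tsconfig.json (relevant part) — mirror the same alias so the compiler and the IDE resolve it.
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@sharedComponents/*": ["src/shared/*"]
    }
  }
}
```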
And if you use Jest for tests, it also needs some configuration so that aliased imports resolve and can be mocked properly.
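The usual way is a moduleNameMapper entry; a minimal sketch, with the same assumed paths:

```js
// jest.config.js (relevant part) — map the alias with a regex so imports
// (and jest.mock('@sharedComponents/...') calls) resolve to the real folder.
module.exports = {
  moduleNameMapper: {
    '^@sharedComponents/(.*)$': '<rootDir>/src/shared/$1',
  },
};
```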
I was listening to a podcast the other day and heard something that struck me as a kind of mistake: avoiding the root cause of a problem instead of trying to properly solve it. This is my personal point of view, but I'll try to justify it as best I can.
In the aforementioned episode, the folks were talking about good testing practices, and the guest said: "one of the first things I do when I begin writing tests is Faker.seed(1234), so I don't get random errors". Faker is an amazing Python library that creates fake data of many kinds, perfect for your tests: full names, full addresses, credit card numbers and more, with support for different locales; the value it provides is sooo good. As a developer, you get a magical, easy-to-use collection of data factories, and to make it easy to create different users it is totally random by default: on instantiation, unless you provide a seed value, it will choose a random one and generate different values on each test and on each run (even of the same test).
Pinning the seed to a fixed value was defended with the argument of getting deterministic/reproducible test runs (one of the speakers had a bug in a certain address field, and it took him a while to find because the test only failed sometimes). Now, of course, when you stick to the same data you get deterministic results, and if you weren't using Faker at all I'd see that as a valid approach. But what Faker provides is an abundance of varied test data: you no longer have just a "John Doe" and a "Jane Doe", you can try names from different locales, and many different ones.
To me, the correct way of using Faker is to generate a random seed (per test, per test file, per build, whatever you prefer), seed the generator with it, but also print the seed to stdout. This way, you get the best of both worlds: varied data on every run, and the ability to deterministically reproduce any failing run by reusing the printed seed.
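A minimal sketch of that pattern with pytest (the conftest.py fixture and the TEST_FAKER_SEED environment variable are my own made-up names):

```python
# conftest.py — a sketch: pick a random seed unless one is forced via an env var,
# print it (pytest shows captured output for failing tests), and seed Faker with it.
import os
import random

import pytest
from faker import Faker


@pytest.fixture
def fake():
    seed = int(os.environ.get("TEST_FAKER_SEED", random.randrange(2**31)))
    print(f"Faker seed: {seed}")
    Faker.seed(seed)
    return Faker()
```

When a test fails, the printed seed is right there in the report, and re-running with TEST_FAKER_SEED set to that value reproduces the exact same data.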
A good example of how this works is another Python library I like to use, pytest-randomly, which shuffles the test execution order on each run so you can detect nasty bugs due to shared state, cache issues, and the like. On each run, it outputs a line like the following:
Using --randomly-seed=3888935657
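If a run fails, that seed is all you need: run pytest again passing the same value (pytest --randomly-seed=3888935657) and you get the exact same test ordering, so the failure becomes deterministic.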
Sometimes making your test data static even makes the test suite more fragile and harder to evolve, because the more data it contains, the more the tests rely on specific values: e.g. always assuming user_id = 1 instead of always referencing your test user #1 (with whatever id it happens to contain). Then you need to create another user with different data, or go change dozens of hardcoded values, fixture JSON files, etcetera etcetera... and in the end you end up with a bunch of hardcoded test users, all of them with hardcoded data that you now also have to explicitly maintain. We now like and strive for configuration as code, so why do manual maintenance of test fixtures instead of relying as much as possible on auto-generated ones?
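A tiny illustration of what I mean, with made-up names (create_user and the client fixture are hypothetical; fake is the fixture sketched above):

```python
# Hypothetical test: assert against whatever the generated user contains,
# instead of hardcoding user_id = 1 and a fixed name everywhere.
def test_profile_shows_the_user_name(client, fake):
    user = create_user(name=fake.name())  # create_user: a made-up factory helper

    response = client.get(f"/users/{user.id}/profile")

    # Whatever id and name the factory produced, the assertion still holds.
    assert response.json()["name"] == user.name
```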
Ultimately, adding randomness to your tests doesn't add flakiness; what it really does is uncover hidden bugs and help you achieve a bit more antifragility. Instead of freezing your test data, add a bit of logging to make sure you have enough information to deterministically reproduce any failure. Embrace and control randomness.
Unconscious Bias is a small but interesting ~20-minute course about some biases that often get triggered without us noticing and that we might not even know we have. We'll learn about 5 biases:
It's a tiny but appealing lesson; it could be longer and go more in depth, but on the other hand it works as a quick learning pill.
To complement it, I'll leave here some notes I had about the topic, mostly a list I gathered some time ago (sadly, I didn't note down where from):
I'll also leave a link to Wikipedia's list of cognitive biases, which includes a lot more.
Among other technologies, I'm getting up to speed with the latest and greatest of React. Although nowadays most official webpages have incredible tutorials (e.g. TypeScript, React or Jest), I both consume a decent amount of audio content and like to have multiple sources of information, so I asked some friends, did some searches and checked some recommendations. As happens a lot lately, it's sometimes hard to distinguish commercial interest or pure feature bloating from real requirements for building JavaScript apps, and even more so with React. I sometimes feel overwhelmed by so many theoretical frameworks and libraries and tools and "extensions" and whatnot, all of which, according to some sources, you should learn and use. But I need to allocate time for other stuff, so it's not always easy to identify and select what will provide the most value for the time invested (and sometimes money, but that's irrelevant if the content is good).
That said, my tiny list of third-party resources that I currently read/listen to/watch and would recommend related to React (and I could also say "JavaScript") is:
I follow a few more blogs and podcasts, but I haven't yet made up my mind about whether they're great enough to keep following or just good. In any case, I'd appreciate suggestions of any other resources for learning React and modern JavaScript that you think are good, so if you have any, please write to me!
Title: Stairway to Badass: The Making and Remaking of Doom
Author(s): David L. Craddock
In theory, half of this book is about Doom, and half about Doom 2016. In reality, you first have to remove the trailing 20% that is just dull content to fill pages (a preview of the Rocket Jump Quake book and a "making of" of this book?!). Then, Doom 2016 gets around 1/3 of the pages, and half of those are "not bad but not incredibly interesting" interviews, including one with a speedrunner. What remains covers classic Doom, but so much of it relates to modding, map making and speedrunning that, if you really want to know anything relevant about how the game was built, you're better off with alternatives like Masters of Doom or the Black Book about Doom.
I have really enjoyed reading other books from this author, but this title feels mediocre. An interview or two with people creating content for the original game so many years later is interesting, but when that makes up not only most of the content but most of the actually interesting or fresh content, something is amiss. Another example is the heavy presence of John Romero's Sigil expansion for classic Doom: sure, it's cool (I played it and the maps are great and well done), but one chapter explaining what it is, another in the form of an interview with Romero, another with the playtester, and another with the company building the physical version?
There are some tiny highlights here and there, but throwing a complex technical sentence in the middle of pages of undesired content doesn't make a book deep or interesting, and overall it's simply not worth reading through. If this were a book about Doom's community and modding, it'd be fine, but the title is totally misleading.
I really didn't know what to expect before starting to read this book, but disappointment wasn't on the list.