Kartones Blog

Be the change you want to see in this world

Course Review: TypeScript Essential Training (LinkedIn Learning)

Review

I decided to also take the TypeScript Essential Training course because it is by the same author as Learning Typescript, who recommended it as a more complete and detailed dive into the language. My review assumes that same path and judges the contents of this course accordingly.

This is an intermediate training of more than 4 hours, divided into 9 chapters. The first four cover basic setup, an introduction to Typescript and typing, and quite a few ES2015 concepts. As such, if you already know the basics and/or are up to date with ES2015/ES6, I suggest skipping directly to the fifth chapter. Still, the remaining chapters cover in good detail not only the main features but also more advanced topics like function overloads, abstract classes, generic constraints, namespaces, sourcemaps... The examples presented are closer to real-world scenarios, and the sample application (a TODO webapp) is small yet appealing for seeing how to migrate code and add typing.

The explanations of how Typescript compiles certain features to Javascript are sometimes very interesting, like namespaces being implemented using IIFEs, or visibility access modifiers not really being enforced at runtime. As the course is from 2016, the ECMAScript version used was still ES2015, and even some features available in that version are not always present in the sample code, but in general I still found the contents useful.
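
As a rough illustration of the namespace point (my own sketch, not code from the course), a declaration like this:

namespace Utils {
  export function greet(name: string): string {
    return `Hello, ${name}`;
  }
}

...gets compiled to ES5 output along these lines, with an IIFE that creates (or reuses) the Utils object and attaches the exported members to it:

var Utils;
(function (Utils) {
    function greet(name) {
        return "Hello, " + name;
    }
    Utils.greet = greet;
})(Utils || (Utils = {}));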

All in all, I think I extracted quite a few learnings from the course.


Course Review: Learning Typescript (LinkedIn Learning)

Review

Slightly over an hour in duration, Learning Typescript nicely guides you through the basic drills of setting up Typescript and adding it to an existing project, alongside explaining the basics: types, interfaces, classes, enums, literal types, and a small introduction to generics and type declarations. It can’t go in depth, but I was pleasantly surprised by how focused it is, perfect for getting a general idea.

Oh, and it even manages to mention some cool tools and projects like DefinitelyTyped, for those third party libraries you might be using that don't provide type definitions yet.


Avoiding relative imports in Javascript, TypeScript, Webpack and Jest

A quick post to serve as a reference for myself on how to avoid/"remove" relative imports in big Javascript projects, monorepos, etcetera.

If you want some context on why this can be a good idea, check for example the article Importing with Absolute Paths using webpack in JavaScript/TypeScript.

A five-second summary:

Converting this unreadable and hard-to-maintain-without-an-IDE monster...

import { whatever } from '../../../../../../../shared/a-category/componentA';

...into something actually meaningful when read:

import { whatever } from '@sharedComponents/a-category/componentA';

The first step is to configure Webpack module aliases (https://webpack.js.org/configuration/resolve/) and Babel aliases via the babel-plugin-module-resolver plugin.
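
A minimal sketch of both configurations, assuming the @sharedComponents alias from the summary above points to a hypothetical src/shared folder (adjust names and paths to your project):

// webpack.config.js -- minimal sketch, not a full config
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      '@sharedComponents': path.resolve(__dirname, 'src/shared'),
    },
  },
};

// babel.config.js -- the same alias, via babel-plugin-module-resolver
module.exports = {
  plugins: [
    ['module-resolver', { alias: { '@sharedComponents': './src/shared' } }],
  ],
};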

Then, for Typescript, we should configure path mappings: https://www.typescriptlang.org/docs/handbook/module-resolution.html#path-mapping
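
The equivalent tsconfig.json fragment could look like this (again assuming the hypothetical src/shared folder):

// tsconfig.json (fragment; the Typescript compiler allows comments in this file)
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@sharedComponents/*": ["src/shared/*"]
    }
  }
}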

And if you use Jest for tests, it also needs configuration to allow proper mocking of aliases: https://jestjs.io/docs/configuration#modulenamemapper-objectstring-string--arraystring
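
For Jest, the matching piece would be a moduleNameMapper entry, something along these lines:

// jest.config.js -- map the alias so imports resolve (and can be mocked) in tests
module.exports = {
  moduleNameMapper: {
    '^@sharedComponents/(.*)$': '<rootDir>/src/shared/$1',
  },
};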


Don't avoid test randomness, embrace and control it

I was listening to a podcast the other day and heard something that struck me as a kind of mistake: avoiding the root cause of a problem instead of trying to properly solve it. This is my personal point of view, but I'll try to justify it the best I can.

In the aforementioned podcast episode, the folks were talking about good testing practices, and the guest said: "one of the first things I do when I begin writing tests is Faker.seed(1234) so I don't get random errors". Faker is an amazing Python library that creates fake data of many kinds, perfect for your tests; from full names to full addresses and credit card numbers, and supporting different locales, the value it provides is sooo good. As a developer, you get a magical and easy to use collection of data factories, and to make it easy to create different users, it is totally random by default: on instantiation, unless you provide it with a seed value, it will choose a random one and generate different values on each test and on each run (even of the same test).

"Sticking" the seed to a fixed value was defended with the argument of getting deterministic/reproducible test runs (one of the speakers had had a bug in a certain address field, and it took him a while to find because the test only failed sometimes). Now, of course, when you stick to the same data you get deterministic results, and if you weren't using Faker at all I'd see it as a valid approach. But what Faker provides is an abundance of varied test data: you no longer have only a "John Doe" and a "Jane Doe", you can try many different names, from different locales.

To me, the correct way of using Faker is to generate a random seed (per test, per test file, per build, whatever you prefer), seed the generator with it, but also print the seed to stdout (see the sketch right after the following list). This way, you get the best of both worlds:

  • Different runs use different data, so you get data with more entropy and more chances of finding hidden bugs that fixed data won't surface
  • If a test fails due to the random data, you have the seed value in the output/test logs, so you can reproduce it
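
A minimal sketch of this idea as a pytest fixture (the fixture name and seed range are arbitrary choices of mine, not from the podcast):

import random

import pytest
from faker import Faker

@pytest.fixture
def fake():
    # New seed on every run, printed so a failing run can be replayed later
    seed = random.randrange(2**31)
    print(f"Using Faker seed: {seed}")
    Faker.seed(seed)
    return Faker()

def test_user_full_name(fake):
    # Hypothetical usage: build test data from the generator instead of hardcoding it
    full_name = fake.name()
    assert len(full_name) > 0

Since pytest prints captured stdout for failing tests, the seed shows up exactly when you need it.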

A good example of how this works is another Python library I like to use, pytest-randomly, which randomizes the test execution order so you can detect nasty bugs due to shared state, cache issues, and the like. On each run, it'll output a line like the following:

Using --randomly-seed=3888935657

Sometimes making your test data static even makes the test suite more fragile and harder to evolve, because the more data it contains, the more the tests rely on specific values: e.g. always assuming user_id = 1 instead of referencing your test user #1 (with whatever id it contains), as in the small contrast below. Then you need to create another user with different data, or go change dozens of hardcoded values, fixture JSON files, etcetera... and in the end you have a bunch of hardcoded test users, all of them with hardcoded data that you now also have to explicitly maintain. We like and strive for configuration as code, so why do manual maintenance of test fixtures instead of relying as much as possible on auto-generated ones?
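
A tiny, hypothetical sketch of that contrast (build_profile is a stand-in for real code under test):

from faker import Faker

def build_profile(user):
    # Stand-in for the real code under test
    return {"user_id": user["id"], "display_name": user["name"]}

def test_profile_shows_the_generated_user():
    fake = Faker()
    # The user data is generated, not hand-maintained in a fixture file
    user = {"id": fake.random_int(min=1, max=10**6), "name": fake.name()}
    profile = build_profile(user)
    # Reference whatever id the generated user actually got, instead of hardcoding 1
    assert profile["user_id"] == user["id"]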

Ultimately, adding randomness to your tests doesn't add flakiness; what it really does is uncover hidden bugs and help you achieve a bit more antifragility. Instead of freezing your test data, add a bit of logging to ensure you have enough information to deterministically reproduce any failure. Embrace and control randomness.


Course Review: Unconscious Bias (LinkedIn Learning)

Review

Unconscious Bias is a small but interesting ~20 minute course about some biases that often get triggered without us being aware, and that we might not even know we have. We'll learn about 5 biases:

  • Affinity bias
  • Halo bias
  • Perception bias
  • Confirmation bias
  • Groupthink

It's a tiny but appealing piece of teaching; it could be longer and go more in depth, but on the other hand it works as a quick learning pill.

Notes

To complement it, I'll leave here some notes I had about the topic, mostly a list I gathered some time ago (sadly, I didn't note down from where):

  • Anchoring bias: People are over-reliant on the first piece of information they hear.
  • Availability heuristic: People overestimate the importance of information that is available to them.
  • Bandwagon effect: The probability of one person adopting a belief increases based on the number of people who hold that belief. (groupthink)
  • Blind-spot bias: Failing to recognize your own cognitive biases is a bias in itself. People notice cognitive and motivational biases much more in others than in themselves.
  • Choice-supportive bias: When you choose something, you tend to feel positive about it, even if that choice has flaws.
  • Clustering illusion: The tendency to see patterns in random events.
  • Confirmation bias: We tend to listen only to information that confirms our preconceptions.
  • Conservatism bias: Where people favor prior evidence over new evidence or information that has emerged. (we are slow to accept new information)
  • Information bias: Tendency to seek information when it does not affect action. (more information is not always better)
  • Ostrich effect: Decision to ignore dangerous or negative information.
  • Outcome bias: Judging a decision based on the outcome, not on how the decision was made in the moment.
  • Overconfidence: Too confident about our abilities, which leads to taking greater risks.
  • Placebo effect: When simply believing that something will have a certain effect on you causes it to have that effect.
  • Pro-innovation bias: When a proponent of an innovation tends to overvalue its usefulness and undervalue its limitations.
  • Recency: Tendency to weight the latest information more heavily than older data.
  • Salience: Tendency to focus on the most easily recognizable features.
  • Selective perception: Allowing expectations to influence how we perceive the world.
  • Stereotyping: Expecting a group or person to have certain qualities without having real information about the person. (we tend to overuse and abuse it)
  • Survivorship bias: Focusing only on surviving examples, causing us to misjudge a situation.
  • Zero-risk bias: We love certainty, even if it's counterproductive.

I'll also leave a link to Wikipedia's list of cognitive biases, which includes many more.

