Kartones Blog

Be the change you wanna see in this world

Book Review: Infinite Game Universe: Mathematical Techniques

Review

Infinite Game Universe: Mathematical Techniques book cover

Title: Infinite Game Universe: Mathematical Techniques

Author: Guy W. Lecky-Thompson

I don't even remember when I got this 2001 book, only that I bought it second hand because I read it was really interesting, and procedural generation of content is something that has always intrigued me. Not having found a digital version or scan anywhere, I decided to read it on paper before gifting it to somebody.

It's a ~300-page hardcover book with a CD-ROM, which sadly was in such bad shape I couldn't read it, although I'm probably not going to miss the source code, as I've been able to write my own versions of some algorithms and examples.

As the title hints, most of the book is about procedural generation of content for videogame universes. No matter if we're talking about planet or character names, lore texts, polygonal trees, complex shapes like a human face, or a full universe composed of galaxies, stars and planets, we're taught multiple techniques to generate random numbers and, most importantly, sequences of those numbers. We also learn that controlling the values of the numbers is as important as at least knowing that any non-purely-random algorithm will tend to fall into a pattern, where numbers start repeating. Combined with statistics, probability and different algorithms, we learn to generate content for our games and even improve the storage of the data (e.g. how the RLE compression algorithm works), and what makes fractals interesting for pseudo-random number generators.
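To make that last point tangible (my own illustration, not an example from the book), a linear congruential generator with deliberately tiny parameters shows both the role of the seed and the inevitable repetition:

def lcg(seed, a=5, c=3, modulus=16):
    # Tiny linear congruential generator; real ones use far larger parameters
    value = seed
    while True:
        value = (a * value + c) % modulus
        yield value

generator = lcg(seed=7)
print([next(generator) for _ in range(20)])
# At most 16 distinct values, then the sequence cycles; and the same seed
# always reproduces the same "random" sequence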

In general the book is well explained and detailed, with abundant examples, some carried over through multiple chapters, up to the point that a few chapters contain almost no theory and instead two or three applications of the topic with the examples (images included). I like it because it helps consolidate what you're learning, but at the same time you can skip a few pages if you already understood it. A big chunk of the examples encompass adjacency tables (via words), line-art (via tree behaviour) and coordinate systems (via images) as sources of data. Also, the final block of chapters is a "full" example of a space exploration game, applying a few of the concepts explained before. It really doesn't apply everything (at least in the book; maybe the source code did) but it still makes for a decent example of object serial numbers and seeding.

Not having the original source code CD, I had to reimplement some of the formulas in Python (instead of the book's C) and found a typo or two (my copy is a 1st edition), but they were pretty simple to correct.

While most of the contents are easy to understand, a few of the examples are not very detailed and it's confusing how to reproduce them; for example, after talking about affine transformations we see two formulas, a table of affine transformation values and an image of a generated fern plant, but no clue about how you actually paint it, because the description of how the affine transformations are applied is incomplete. In the end it is just explaining the Barnsley fern fractal, but a) curiously without giving the exact name, and b) without properly explaining the formula and steps, just throwing a bunch of formulas and values and saying "if you input this and feed back the last values you arrive at this fern drawing". It is nice to learn not only how they are drawn but why they are so cool, but I shouldn't have to go to Wikipedia when in theory I have a bunch of pages detailing it.
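Since the book leaves the actual painting steps out, here is a minimal Python sketch of the missing iteration, using the standard Barnsley fern coefficients (which may not match the book's exact table): repeatedly pick one of four affine transformations at random, weighted by probability, and feed the last point back in.

import random

# Standard Barnsley fern coefficients (a, b, c, d, e, f),
# applied as x' = a*x + b*y + e ; y' = c*x + d*y + f
TRANSFORMS = [
    (0.0, 0.0, 0.0, 0.16, 0.0, 0.0),      # stem
    (0.85, 0.04, -0.04, 0.85, 0.0, 1.6),  # successively smaller leaflets
    (0.2, -0.26, 0.23, 0.22, 0.0, 1.6),   # largest left leaflet
    (-0.15, 0.28, 0.26, 0.24, 0.0, 0.44), # largest right leaflet
]
WEIGHTS = [0.01, 0.85, 0.07, 0.07]

def barnsley_fern(iterations=50000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(iterations):
        a, b, c, d, e, f = random.choices(TRANSFORMS, weights=WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f  # feed last point back in
        points.append((x, y))
    return points  # plot e.g. as a matplotlib scatter to see the fern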

My Barnsley fern

I felt that the whole block about fractals contains a lot of theory and not much practical use; it feels a bit incomplete. It talks here and there about subjects without almost any specific example, like "Pac-Man ghosts can share the same movement behaviour" (of course, but how does this relate to fractals?). We are indeed shown how to modify object adjacency tables with a Mandelbrot fractal, but I feel there are too many pages on the topic for so few applicable results.

Despite my small complaints, overall it is a great resource for learning the basics of procedural content generation, and it opened my mind to how important choosing the data set and/or the algorithms is, and to how even "bad" things (like repeating patterns) can be controlled and put to good use.


English Course Reviews - February 2020

In order to stop spamming with so many English course reviews (2 weekly hours ain't bad), I've decided to aggregate all of my reviews of finished courses into a single monthly post.



50 English Phrases, Idioms, and Expressions for ESL Students (Udemy course): Free course, around 1h long. Very nicely split into either a single sentence or a pair of sentences per lesson, so you can ingest it either in short bursts or in a single session. Well organized and recorded, and I gladly found a few new idioms I didn't know, so considering the cost, more than recommended.


English for IT Professionals (Udemy course): Quality and audio volume vary with each lesson; nothing terrible, but it is annoying to have to keep turning the volume up and down between videos, or seeing the Mac brightness controls pop up, etc.

It has a good list of terms, vocabulary and even a few IT slang expressions (e.g. "blamestorming" or "hamster wheel"), plus the grammar part is always exemplified with IT-related topics and sentences. It is interesting, but to me it felt short on content.


Business English: Easy English for Meetings (Udemy course): This course has abundant speaking mistakes, which I'm normally not against, but here you will notice them quite often. Also, when you provide a "sample text" and say "I'm going to read it", you are expected to read it exactly, not summarize it. And finally, the slides go way too fast: when the teacher wants you to read one, they just mention it and move on to the next slide, so it's hard to even pause before it goes away.

Despite those complaints about things that could be improved, it is an interesting and topic-focused, albeit short, course.



Why the 10th Man Rule is relevant

World War Z is an amazing zombie book with a terrible and unique movie adaptation. As far as I know, it is the only zombie film not featuring a single drop of blood or a single visible zombie bite; I guess they spent all the budget on both Brad Pitt's salary and the marketing campaign. But there are learnings even in bad things, and this case is no exception. In one scene, a character describes the 10th Man Rule (also known as devil's advocate), which proposes:

Produce a range of explanations and assessments of events that avoid relying on a single concept [...].
If ten people are in a room, and nine agree on how to interpret and respond to a situation, the tenth man must disagree. His duty is to find the best possible argument for why the decision of the group is flawed.

Now, this rule may be real or may be fictional, but either way I like it and sometimes apply it. Usually in the context of everyone listing the pros while I focus on the cons, but the opposite can also happen.

I think this rule is relevant in discussions because, in a perfect world, we would gather different experts and stakeholders and spend as much time as needed thinking and talking about the approaches, choices and trade-offs of any proposal or technical requirements specification, but the reality usually goes more along the lines of:

  • If the proposal comes from Product, it usually says little about trade-offs and difficulties (other than listing existing ones as justification for the proposal itself)
  • If the proposal or tech spec comes from Engineering, my general feeling is that, with all this "perversion of agile", we can now mostly say "I wanna do X" and just jump into building it, losing that phase of properly researching, or simply giving some thought to, how to achieve the goal; what I sometimes call "think before you code". I am not a fan of endless documentation, but writing a tech spec forces you to think about your action plan at least once [1].

Circling back to the main point, we usually attend meetings or review proposals that many times lack depth. Now, ignorance is bliss, they say, and sometimes that is good: if you don't know that X is impossible, you might find a way to actually do it [2]. But often those meetings are either an echo chamber or an environment where there's some aversion to voicing concerns (to "saying no"). I'm pretty sure everyone has at least once been in the situation of facing everyone else thinking "A is best" while we think "Are you all blind? B is clearly best!".

The purpose of this rule is not to be an excuse to become the grumpy person who always points out the flaws, but to enable discussions that challenge the main approach and avoid groupthink (which usually impedes individual thinking). Note that this does not mean you can be disrespectful or impolite, of course.

It is nice to think positively and generally seek the "how yes", but we must not forget to also think about the "why not" at some point.


[1]: I view it the same way that explaining a subject to somebody else helps you study it: being forced to explain or detail something reinforces what you learn.

[2]: Wasn't there some quote along the lines of "Nobody told me it was impossible, so I did it"?


Let's talk about idempotency

Let's have a brief talk about idempotency applied to computer science.

Definitions

Wikipedia's definition of idempotence says a function is idempotent if the system state remains the same after one or several calls.

A Stripe blog post nicely summarizes that an idempotent API or service endpoint can be called any number of times while guaranteeing that side effects only occur once.

Idempotency, then, is a mechanism that helps us with:

  • Making retries safe, as otherwise a retry can make things worse and cause a cascading effect.
  • Dealing with duplicates. Generalizing the previous point, applied to messaging and/or service interactions, it gives us most of the benefits of exactly-once message delivery in a world of at-least-once.
  • Achieving a certain degree of data integrity without relying on distributed transactions, two-phase locking and other mechanisms which are more reliable but also incur performance penalties.

How

Definitions are fine, but they do not explain how to achieve idempotency. The main pillar is the idempotency-key, a way to identify a change request so that we can detect repetitions/duplications. The key might be something really simple, like a UUID the caller generates and sets as a custom header (e.g. Idempotency-Key for Stripe calls). It could be a hash of the sent data. It could also be a few fields we decide are relevant.
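As an illustration (my own sketch, not any specific library's API), deriving the key from a hash of the sent data could look like this:

import hashlib
import json

def idempotency_key_from_payload(payload):
    # Hash a canonical serialization of the request data, so the same
    # logical request always yields the same key
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()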

We'll focus on a non-trivial scenario: we have a service that can receive calls from multiple sources/clients and that internally keeps an FSM (Finite State Machine). We want to protect this service so that it only processes a request to "transition from state A to B" once. This isn't trivial: it is very easy to protect against the same caller repeating requests (e.g. if the caller has implemented a retry system), but harder when you somehow need to detect that client X performed a request to transition from A to B, and then a request arrives from client Y with the same petition: transition from A to B.

We could start by defining our idempotency-key as simply <target_entity_id> + <new_state>, and that would work for simple scenarios, but what happens if our state graph allows multiple ways of reaching state B? Then comes my suggestion: if you notice, I didn't say transition to B but transition from A to B. If our idempotency-key is for example <target_entity_id> + <new_state> + <current_state>, we can now easily differentiate the transitions A -> B and D -> B without problems.
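Sketched in code (names are illustrative, not from any concrete codebase):

def fsm_idempotency_key(target_entity_id, current_state, new_state):
    # Including the source state differentiates A -> B from D -> B
    return "{}:{}:{}".format(target_entity_id, current_state, new_state)

fsm_idempotency_key("order-42", "A", "B")  # "order-42:A:B"
fsm_idempotency_key("order-42", "D", "B")  # "order-42:D:B"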

And now, what do we do with this idempotency-key? We simply use it to keep track of recent calls:

  • If the key is not present in our idempotency storage (Redis or in-memory are common, but any storage you can imagine is fine), we perform the action and cache the output in our idempotency storage (pretty much like a cache).
  • If the key is present, we return the cached results and do not execute anything [1].

We shouldn't keep cached responses forever, right? This is why Redis or a similar in-memory cache is such a good fit: you just set a decent TTL for the idempotency items you store, and forget about them. The value depends: for sensitive operations like a purchase I've set it in the past to one hour, but it could be extended for long-running processes or batch jobs, or kept very short (e.g. a few seconds for a delete operation).
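A minimal sketch of that check-execute-cache flow, assuming the redis-py client and ignoring concurrent calls for now (more on those below):

import json
import redis  # assumes the redis-py package

client = redis.Redis()
IDEMPOTENCY_TTL_SECONDS = 3600  # e.g. one hour for sensitive operations

def run_idempotent(key, action):
    cached = client.get("idempotency:" + key)
    if cached is not None:
        return json.loads(cached)  # repeated call: execute nothing
    result = action()
    # Cache the output with a TTL so entries expire on their own
    client.set("idempotency:" + key, json.dumps(result),
               ex=IDEMPOTENCY_TTL_SECONDS)
    return result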


There's one remaining subtlety: what do we do if our system is designed in such a way that there can be concurrent requests to the same service (trivial scenario: you have multiple instances of it)? What if we have a slow endpoint and we get a second request to transition from A to B while the first one is still executing?

Here it is true that idempotency alone doesn't fully help us, because it will only work flawlessly in one of the following cases:

  • You have a single service instance (or a single point of entry for processing actions/requests).
  • You have an action queue, buffer, etc., so again, actions are processed sequentially.
  • You only care about repeated requests from the same caller (like Stripe's idempotency key implementation as a unique hash id).

If we want to support concurrent execution of idempotent requests, we probably need some request management mechanism to detect executing (not yet completed) requests and apply some sort of strategy to them:

  • Wait until the original finishes?
  • Re-enqueue the request at the tail?
  • Return an HTTP 307 redirect or a 409 conflict?

We can incorporate this detection into the idempotency middleware/component: instead of just storing the response status and data, we also store whether the request is finished or ongoing (personal advice: if ongoing, set a small TTL). Or we can keep a separate request log (just tracking which ones have finished and which ones are ongoing). We could even implement most of the idempotency management at the NGINX level with some Lua scripting, although here I advise caution, because caching non-GETs is a dangerous path and you must be very careful discerning which HTTP headers to take into account as part of the idempotency "key".

Something along the lines of these possible lookup results:

# not present
return None

# ongoing request
return {
    "status": "ongoing"
}

# finished request (non-error)
return {
    "status": "finished",
    "http_status": 200,
    "response": { ... }
}

# finished request (error)
return {
    "status": "finished",
    "http_status": 400,
    "response": None
}
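Building on those stored states, a hypothetical handler (lookup_request, mark_ongoing and store_finished are illustrative helpers, not a real API) could pick the "409 conflict" strategy for concurrent duplicates:

def handle_transition(key, action):
    entry = lookup_request(key)  # returns one of the shapes sketched above
    if entry is None:
        mark_ongoing(key)  # store {"status": "ongoing"} with a small TTL
        result = action()
        store_finished(key, http_status=200, response=result)
        return 200, result
    if entry["status"] == "ongoing":
        # Concurrent duplicate: reject instead of executing twice
        return 409, {"error": "request already in progress"}
    return entry["http_status"], entry["response"]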

Alternatives

Our previous scenario is one that, with some convention and agreement over the data sent, or via flexible configuration options, could be built for example as a Django REST Framework decorator; a rough shape is sketched below.
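A hypothetical sketch of that decorator packaging (idempotency_storage stands in for whatever backend you pick; key_func is an assumed callable that extracts the key from the request):

import functools

def idempotent(key_func, ttl_seconds=3600):
    def decorator(view_func):
        @functools.wraps(view_func)
        def wrapper(request, *args, **kwargs):
            key = key_func(request)
            cached = idempotency_storage.get(key)
            if cached is not None:
                return cached  # duplicate: return the stored response
            response = view_func(request, *args, **kwargs)
            idempotency_storage.set(key, response, ttl=ttl_seconds)
            return response
        return wrapper
    return decorator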

A post from Particular (makers of NServiceBus) includes a handy list of some alternative approaches:

  • Message de-duplication: Explained above when done inside services; when talking about message-based communication, it means literally detecting repeated requests in a short time span and removing the duplicates.
  • Natural idempotency: Coding your logic to be as idempotent as possible. This is always desired and can be done for individual entities with some effort, but with complex services it is really hard to achieve in the upper layers.
  • Entities and messages with version information: If the data can carry some kind of version number, we can say "I want to update data of entity id XXX, being at version 3"; then, if the backend detects that the currently stored version is no longer 3 (because something else updated it), it can fail the change request (see the sketch after this list). This has the drawback of needing extra communication, as the change request emitter would need to query for the current data and try to apply the modifications again.
  • Side effect checks: A somewhat naive approach when talking about complex systems; being able to detect whether the side effect is already present (if our service is already in state B, don't execute the transition from A to B) is something you ought to be doing already.
  • Partner state machines: Having a single source that can issue a change request allows controlling execution (and narrowing it to exactly-once), but it also creates a single point of failure, and for existing complex systems it might not be so easy to achieve.
  • Accept uncertainty: Embracing chaos is always an option, but one that usually doesn't end well 😆.
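As an illustration of the version-based approach mentioned above (my own sketch; table and column names are hypothetical), a SQL update guard in the optimistic-concurrency style:

def update_entity(conn, entity_id, new_data, expected_version):
    # conn is a DB-API connection, e.g. sqlite3.
    # Only apply the change if the stored version still matches;
    # if something else updated first, zero rows are touched.
    cursor = conn.execute(
        "UPDATE entities SET data = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_data, entity_id, expected_version),
    )
    return cursor.rowcount == 1  # False -> fail the change request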

Wrapping up

Pure, raw idempotency is hard to achieve in a complex real-world system. If you can, of course go for the simplest approach: wrap your data change request in a transaction and roll back if idempotency must be honoured; you will leverage existing tools and have no unexpected side effects. As for me, the times I've implemented idempotency (at either the API level or the service endpoint level), a good idempotency key plus TTL-limited caching of the results has proven useful.

Notes

[1] Not re-executing the logic is an important remark. In a pure mathematical sense, where for example the number 1 is idempotent for multiplication because no matter how many times you multiply by it, N*1 = N, we could execute the logic every time and "just make sure the system stays the same". Now, just imagine how quickly this can spiral into very complex logic. Let me give you a simple example: ORM models usually keep modified_at timestamps, so if you want to be pure/strict, that timestamp shouldn't update more than once when running an idempotent change request twice; thus you would probably need to use transactions everywhere, which can be a huge performance penalty. And this is how we arrive at the alternative: "doing nothing". If we already know the output, the best way to not alter the system is to not touch it; we just return the cached output and all is fine, respecting idempotency theory while ensuring the system/data stays exactly the same.


Course Review: Master English: 100 Phrasal verbs for IELTS (Udemy)

Master English: 100 Phrasal verbs for IELTS, TOEFL, CAE, FCE is, apart from a very SEO-oriented title, another 4-hour Udemy English course, which I took to focus on practising phrasal verbs. Using 10 topics and a conversation for each, you'll do some exercises and learn multiple usage examples of each of the hundred verbs.

Correctly done and pronounced, and easy to follow, although in the conversations the speed is noticeably fast (so if you already listen at 1.25x or 1.5x, they speak really quickly). While I think I didn't learn anything new (I've completed other courses about phrasal verbs), it was a good practice and reminder exercise.

