Kartones Blog

Be the change you wanna see in this world

Double-Linked List Python implementation

A while ago I read an article about implementing linked lists in Python, and to practice I reproduced the same interface with some additions (adding a tail, mypy type hints and a few tests).

Then I forgot about it, until recently I read something about sorting linked lists (I think in the DOOM source code analysis book I've read, but it's not relevant) and thought "I have this Python implementation, why not practice sorting that list?". So I grabbed the code and started working on it, and apart from a bugfix (tests are wonderful, but you need to write proper ones and not just happy cases), I ended up adding new functions like insert_after(), sort(), flip() and clear(), making the linked list iterable, and improving the performance of some operations along the way (also refreshing my big-O notation).

The beauty of having spent some time iterating on those new functions and "enhancements" is that now the code looks slick and is still useful, as you can specify your own sorting logic, normal or reversed, etc. Here's, for example, the relevant code to make the list iterable and sortable:

def __init__(self) -> None:
    self.head: Optional["LinkedListNode"] = None
    self.tail: Optional["LinkedListNode"] = None

    self._iteration_cursor: Optional[LinkedListNode] = self.head

def __iter__(self) -> "LinkedList":
    self._iteration_cursor = self.head
    return self

def __next__(self) -> LinkedListNode:
    if not self._iteration_cursor:
        raise StopIteration
    current_node = self._iteration_cursor
    self._iteration_cursor = current_node.next_node
    return current_node

# ...

def sort(self, comparison_function: Callable, reverse: bool = False) -> None:
    # sorted() consumes the list itself, thanks to __iter__/__next__ above
    ordered_list = sorted(self, key=comparison_function, reverse=reverse)

    # re-link the nodes in their new order (tail-insert helper name assumed)
    self.clear()
    for node in ordered_list:
        self.insert_at_tail(node)

And a sample usage taken from the tests:

def node_comparison_function(node):
    return int(node.data["id"]) if node else 0

node_1_data = {"id": 1}
node_2_data = {"id": 2}
node_3_data = {"id": 3}
node_4_data = {"id": 4}

linked_list = LinkedList()

# ... the four nodes are inserted into the list here ...

# 1 2 3 4 -> 4 3 2 1
linked_list.sort(comparison_function=node_comparison_function, reverse=True)
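Stitching the snippets above into a runnable, minimal sketch (the node class and the insert_at_tail() helper are assumptions for illustration, not necessarily the repo's actual API):

```python
from typing import Any, Callable, Optional


class LinkedListNode:
    def __init__(self, data: Any) -> None:
        self.data = data
        self.previous_node: Optional["LinkedListNode"] = None
        self.next_node: Optional["LinkedListNode"] = None


class LinkedList:
    def __init__(self) -> None:
        self.head: Optional[LinkedListNode] = None
        self.tail: Optional[LinkedListNode] = None
        self._iteration_cursor: Optional[LinkedListNode] = None

    def insert_at_tail(self, node: LinkedListNode) -> None:
        # Link the new node after the current tail (helper name assumed)
        node.previous_node = self.tail
        node.next_node = None
        if self.tail:
            self.tail.next_node = node
        self.tail = node
        if not self.head:
            self.head = node

    def clear(self) -> None:
        self.head = self.tail = self._iteration_cursor = None

    def __iter__(self) -> "LinkedList":
        self._iteration_cursor = self.head
        return self

    def __next__(self) -> LinkedListNode:
        if not self._iteration_cursor:
            raise StopIteration
        current_node = self._iteration_cursor
        self._iteration_cursor = current_node.next_node
        return current_node

    def sort(self, comparison_function: Callable, reverse: bool = False) -> None:
        # sorted() consumes the list itself, thanks to __iter__/__next__
        ordered_nodes = sorted(self, key=comparison_function, reverse=reverse)
        self.clear()
        for node in ordered_nodes:
            self.insert_at_tail(node)


linked_list = LinkedList()
for i in (1, 2, 3, 4):
    linked_list.insert_at_tail(LinkedListNode({"id": i}))

# 1 2 3 4 -> 4 3 2 1
linked_list.sort(comparison_function=lambda node: int(node.data["id"]), reverse=True)
print([node.data["id"] for node in linked_list])  # → [4, 3, 2, 1]
```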

It was a nice exercise, and after a friend sent me yesterday a link to Nicholas Zakas' JavaScript implementation of a singly-linked list, I like how my approach is quite extensible and fast, but in exchange it requires you to explicitly find the nodes before being able to remove() or insert_after(). Although having to write list.remove(list.find(xxxx)) instead of list.remove(xxxx) doesn't feel terrible to me (explicit over implicit?), and the list interface becomes tighter, as it uses nodes everywhere possible instead of raw data, leaving that for inserts and searches only.

All of this was just meant to say that if you want a Python double-linked list implementation that is quite deeply tested, type-annotated, and supports reversing, inserting at head/tail/anywhere, searching, and sorting by a custom function in both directions, I built one: https://github.com/Kartones/PythonAssorted/tree/master/double-linked-list .

Speeding up my websites for fun

Last weekend I did one of those typical boring tasks you always feel too lazy to do: checking the whole blog CSS to remove all the unused Bootstrap clutter. Some time ago I fully removed all the JavaScript from the site (except for an async stats ping AJAX call using vanilla JS, deferred 5 seconds), and I have never used comments or forms since I migrated the blog to Pelican (almost 3 years ago), so I was expecting lots of unused stuff... but the CSS went from around 50KB minimized to less than 15KB!

It might seem extreme, but one hobby (obsession?) of mine is trying to apply speedups to my tiny blog to play with different techniques and tips. Achieving that small size, plus not using web fonts (I hate the loading blink until the font is rendered, and fallback fonts are a patch), always keeping the content minimized, and supporting Gzip and cache expiration headers, makes me feel good knowing it loads as fast as possible.
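For reference, a hedged sketch of what Gzip plus cache expiration headers can look like in an nginx config (directive values are illustrative; my actual setup may differ):

```nginx
# Enable Gzip for text-based assets (CSS, JS, SVG)
gzip on;
gzip_types text/css application/javascript image/svg+xml;

# Far-future expiration headers for static resources
location ~* \.(css|js|jpg|png|gif)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```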

I don't apply everything I read about if it's not worth it, though. As an example, deferring image loading until they enter the visible viewport is something I did in the past, but the gain was so small (as I don't abuse images on posts, and many are pure text) that instead I started properly compressing images... well, rather obsessively, as I applied Google's Guetzli compressor to all existing images (you can achieve around 20%-25% size savings on a JPG), so they simply load quickly. But then, with some images, any 85%-quality exported JPG is already so small that it's not worth the 2-3 minutes of Guetzli compression (at least on my old laptop), so I'm now more selective about what to squeeze and what to leave as a standard JPG.
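That selectivity can be sketched in a few lines of Python (the threshold value is made up for illustration): pick only the JPGs big enough that Guetzli's minutes-long run pays off, and leave the rest as standard exports.

```python
from pathlib import Path
from typing import List

# Threshold is an assumption for this sketch: below it, a standard
# 85%-quality JPG export is already small enough to skip Guetzli
GUETZLI_THRESHOLD_BYTES = 100 * 1024


def images_worth_compressing(folder: str) -> List[Path]:
    """Return the JPGs large enough to justify a slow Guetzli pass."""
    return [
        path
        for path in sorted(Path(folder).glob("**/*.jpg"))
        if path.stat().st_size >= GUETZLI_THRESHOLD_BYTES
    ]
```

Each returned path would then be fed to the guetzli binary (something like `guetzli --quality 84 input.jpg output.jpg`).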

The blog does not need it, but on my portfolio I've also inlined the CSS (one less HTTP request), and the small company logos are a sprite sheet, so it's also a single HTTP request instead of one per logo (plus some PNG palette reductions, making the file weigh ~10KB).
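The sprite-sheet trick boils down to CSS like the following (class names, file name and offsets are illustrative, not my portfolio's actual ones):

```css
/* One shared image for all the logos: a single HTTP request */
.logo {
    background-image: url("company-logos-sprite.png");
    width: 32px;
    height: 32px;
}

/* Each logo just shifts the shared background to its slot in the sheet */
.logo-first  { background-position: 0 0; }
.logo-second { background-position: -32px 0; }
```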

I also like to run "crazy" compatibility and speed tests from time to time, like browsing the blog on my Nintendo 3DS or with my Kindle (which has to be worse and slower than most phones on Earth), just because "why not?", for fun. Browsing from an ebook reader when we have such powerful phones is absurd, but a decent challenge.

And all of this comes to light after reading an interesting post, The Cost of JavaScript In 2018, which contains lots of interesting advice on pretty current techniques (especially if you are working with SPAs and/or modern JavaScript frameworks). Definitely recommended reading.

Update: Link to the great Smashing Magazine's Front-End Performance Checklist 2019, a huge article containing most, if not all, state-of-the-art performance improvements and tips you can apply to web applications, static resources and the like. A must-read.

Book Review: Game Engine Black Book: DOOM


Title: Game Engine Black Book: DOOM

Author: Fabien Sanglard

Second book from Fabien Sanglard analyzing the source code of another id Software hit, this time the awesome DOOM, released in 1993. Over more than 400 pages we get to learn about the 80486, NeXT workstations and, in general, the hardware of those years, very relevant to the development and optimizations of the game. We of course get detailed explanations of the most relevant fragments of the source code (covering most of the engine), but an interesting newcomer in this book is a quite big section dedicated to the various console ports the game had and their "adventures".

It is amazing to learn how the 3D engine drew based on visplanes and the optimizations it required: painting vertically, drawing first the walls and then the ceilings and floors, and a multitude of internals that had never been done before. It is also yet another example of what a genius John Carmack is: he designed a mostly multi-platform architecture so that by modifying only 5 .c files the game would run on NeXT, MS-DOS or other systems (today it runs almost everywhere), abstracting all I/O, from VGA access to sound and input.

The following image is an example of the many diagrams (and interesting content) you will find inside, showcasing another topic I really liked: the artificial intelligence of the monsters and the "noise tricks" to simulate entity scripting.

Sample DOOM book page

As a retro gamer, I loved the section on the different console ports of the game, with greatly detailed intros to each platform's hardware to pave the road to why most of the ports were poorer than "a simple PC". Especially incredible is the story behind the Super Nintendo / Super Famicom port, with the SuperFX2 chipset and reverse-engineered DOOM code and data. In general, rendering 1-pixel-wide columns for consoles which were great at 3D but had no proper texture perspective correction is another clever idea I wouldn't have imagined even possible.

If DOOM became an instant classic for you, either when it came out or when you were finally able to get it (in Spain, for example, until DOOM II came out it was impossible to obtain the first one without resorting to piracy), this book is a must-read. Some concepts won't be easy to digest at first, but it is both a testament to technical advancements and a small compendium of achieving the impossible (some console ports).

After first Wolfenstein 3D and now DOOM, I can only hope that the author has the strength to write a third book about my all-time favourite, Quake :)

2018 Recap

ticketea got acquired by Eventbrite, and I got promoted to the shiny title of Principal Engineer, aka "keep working on tech stuff without managing people". Same (but bigger) business, similar tech stack (Python, MySQL, Redis), different day-to-day work. While still doing mainly backend, I'm learning ES6 and React to also break JavaScript stuff. And doing some tiny architecture proposals and misc tasks. Too many NDAs, so I won't be talking much about work for the time being.

Not many talks; as a consequence of the work confidentiality and working on internal/unreleased projects, it is hard to find a topic to talk about. At least I got to do some cool stuff and gave a final ticketea-branded talk about MyPy before the acquisition.

Visited USA (Nashville and San Francisco) and UK (London) this year thanks to work.

Working hard on improving my English, now that some days I talk more with humans through videoconferences than in person.

Got to play a little with reinforcement learning; nothing major, but enough to now want to go deeper. As usual, depending on free time and priorities (I have at least one personal project I'd like to move forward with before).

Read a few books, listened to lots of podcasts and lowered noise by reducing Twitter usage to only shoutbox + replies and doing a huge cleanup of RSS and other news sources. Quality reading and time spent matters a lot.

Changed my approach to videogames: fewer games at once, way more focus on each. Also built a RetroPie system and stopped getting angry at Linux emulators breaking now and then.

Set aside some time blocks to do personal experiments. Getting up 1h earlier for studying/reading also helps.

More sad family issues (a younger cousin passed away).

Focusing a lot lately on improving and enjoying personal life, family and pets. The dog is now as healthy as ever, but one of the cats got sick. Sadly, the social part is lagging behind, but it is on the radar to be improved whenever possible.

Course Review: Master English (Udemy)

Another course from Udemy, another review. Master English: Improve Your Speaking, Listening, & Writing provides around 5 hours of content to improve your intermediate level. My small caveats or things to improve after going through it are:

  • Some mistakes are not edited/corrected. I understand that it takes time, but leaving them in looks unprofessional. Correcting a mistake on the fly once or twice is fine, but it happens often enough to be noticeable
  • There is background noise in some videos, and a few sound metallic, as if recorded with a lower-quality microphone. Again, this lowers the perceived quality
  • Quality of the slides varies; in general they are good, but some images are of terribly low quality, a few have black text over them (making it hard to read anything), and sometimes the speaker does the mouse-cursor dance thingy (appearing or directly "pointing out" text), which at least to me is quite annoying
  • Topics are very varied, as in "a few might not interest you at all"

As you can see, nothing major, and the content is good enough to be worth the cheap price.
