It might seem extreme, but one hobby (obsession?) of mine is applying speed-ups to my tiny blog just to play with different techniques and tips. Achieving that small size, plus not using web fonts (I hate the blinking until the font is rendered, and fallback fonts are just a patch), always keeping the content minified, and supporting Gzip and cache expiration headers makes me feel good knowing it loads as fast as possible.
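As a rough illustration of how much Gzip can shave off typical HTML, here is a minimal sketch using Python's standard `gzip` module (the sample markup and the exact ratio are just for illustration; real savings depend entirely on the content):

```python
import gzip

# Repetitive markup, like most HTML, compresses very well.
html = ("<article><h1>Post title</h1><p>Some paragraph text that "
        "repeats common tags and words.</p></article>" * 50).encode("utf-8")

compressed = gzip.compress(html, compresslevel=9)

print(f"original: {len(html)} bytes")
print(f"gzipped:  {len(compressed)} bytes")
print(f"saved:    {100 * (1 - len(compressed) / len(html)):.1f}%")
```

In practice the web server (nginx, Apache, a CDN) does this transparently per request, or you pre-compress the static files once at build time.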
I don't apply everything I read about if it isn't worth it, though. As an example, deferring image loading until images enter the visible viewport is something I did in the past, but the gain was so small (I don't abuse images in posts, and many are pure text) that instead I started properly compressing images... well, really obsessively, as I applied Google's Guetzli compressor to all existing images (you can achieve around 20%-25% size savings on a JPG), so they simply load quickly. But with some images, an 85% quality exported JPG is already so small that it isn't worth the 2-3 minutes of Guetzli compression (at least on my old laptop), so I'm more selective now about what to squeeze and what to leave as a standard JPG.
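A sketch of the kind of selective batch script I mean, assuming the `guetzli` binary is on the PATH (the 200 KB cutoff is an arbitrary value for illustration, not a recommendation):

```python
import subprocess
from pathlib import Path

# Only Guetzli-compress JPGs big enough for the ~20-25% saving to
# justify the minutes of CPU time; leave small exports as-is.
THRESHOLD_BYTES = 200 * 1024  # arbitrary cutoff, tune to taste

def worth_squeezing(path: Path, threshold: int = THRESHOLD_BYTES) -> bool:
    """Decide whether a JPG is large enough to bother compressing."""
    return (path.suffix.lower() in (".jpg", ".jpeg")
            and path.stat().st_size >= threshold)

def squeeze_images(root: Path) -> None:
    for image in sorted(root.rglob("*")):
        if image.is_file() and worth_squeezing(image):
            # Guetzli writes a separate output file; replace the
            # original only once it finishes successfully.
            out = image.with_name(image.stem + ".guetzli.jpg")
            subprocess.run(["guetzli", "--quality", "85",
                            str(image), str(out)], check=True)
            out.replace(image)
```

Running `squeeze_images(Path("static/images"))` once over the existing posts would then skip anything already small.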
The blog does not need it, but on my portfolio I've also inlined the CSS (one less HTTP request), and the small company logos are a sprite sheet, so they are also a single HTTP request instead of one per logo (plus some PNG palette reductions bring the file down to ~10KB).
I also like to run "crazy" compatibility and speed tests from time to time, like browsing the blog on my Nintendo 3DS or on my Kindle (which has to be worse and slower than most phones on Earth), just because "why not?", for fun. Browsing from an ebook reader when we have such powerful phones is absurd, but a decent challenge.
Update: Here's a link to Smashing Magazine's great Front-End Performance Checklist 2019, a huge article containing most if not all state-of-the-art performance improvements and tips you can apply to web applications, static resources and the like. A must-read.
Title: Game Engine Black Book: DOOM
Author: Fabien Sanglard
The second book from Fabien Sanglard analyzing the source code of another id Software hit, this time the awesome DOOM, released in 1993. Over more than 400 pages we get to learn about the 80486, NeXT workstations and, in general, the hardware of those years, very relevant to the development and optimizations of the game. We of course get detailed explanations of the most relevant fragments of the source code (covering most of the engine), but an interesting newcomer in this book is a quite big section dedicated to the various console ports the game had and their "adventures".
It is amazing to learn how the 3D engine drew the world based on visplanes, the optimizations it had to make to paint vertically (drawing the walls first and then ceilings and floors), and a multitude of internals that had never been done before. It is also yet another example of what a genius John Carmack is: he designed a mostly multi-platform architecture so that by modifying only 5 .c files the game would run on NeXT, MS-DOS or other systems (today it runs almost everywhere), abstracting all I/O, from VGA access to sound and input.
The following image is an example of the many diagrams (and interesting content) you will find inside, showcasing another topic I really liked: the artificial intelligence of the monsters and the "noise tricks" to simulate entity scripting.
As a retro gamer, I loved the section on the different console ports of the game, with greatly detailed intros to each platform's hardware that pave the road to why most of the ports were poorer than on "a simple PC". Especially incredible is the story behind the Super Nintendo / Super Famicom port, with its SuperFX2 chipset and reverse-engineered DOOM code and data. In general, rendering 1-pixel-wide columns for consoles that were great at 3D but had no proper perspective-correct texture mapping is another clever idea I wouldn't have imagined even possible.
If DOOM became an instant classic for you, either when it came out or when you were finally able to get it (in Spain, for example, it was impossible to obtain the first game without resorting to piracy until DOOM II came out), this book is a must-read. Some concepts won't be easy to digest at first, but it is both a testament to technical advancement and a small compendium of achieving the impossible (some of the console ports).
After first Wolfenstein 3D and now DOOM, I can only hope that the author has the strength to write a third book, about my all-time favourite, Quake :)
Not many talks: as a consequence of work confidentiality and working on internal/unreleased projects, it is hard to find a topic to talk about. At least I got to do some cool stuff and gave a final ticketea-branded talk about MyPy before the acquisition.
Visited USA (Nashville and San Francisco) and UK (London) this year thanks to work.
Working hard on improving my English now that, some days, I talk more with humans through videoconferences than in person.
Got to play a little with reinforcement learning; nothing major, but enough to now want to go deeper. As usual, it depends on free time and priorities (I have at least one personal project I'd like to move forward with first).
Read a few books, listened to lots of podcasts, and lowered the noise by reducing Twitter usage to only shoutbox + replies and doing a huge cleanup of RSS and other news sources. Quality reading and time spent matter a lot.
Changed my approach to videogames: fewer games at once, way more focus on each. Also built a RetroPie system and stopped getting angry at Linux emulators breaking every now and then.
Set aside some time blocks to do personal experiments. Getting up 1h earlier for studying/reading also helps.
More sad family issues (a younger cousin passed away).
Focusing a lot lately on improving and enjoying personal life, family and pets. The dog is now as healthy as ever, but one of the cats got sick. Sadly the social part is lagging behind, but it is on the radar to be improved whenever possible.
Another course from Udemy, another review. Master English: Improve Your Speaking, Listening, & Writing provides around 5 hours of content to improve your intermediate level. My small caveats or things to improve after going through it are:
As you can see, nothing major, and the content is good enough to be worth the cheap price.
This Udemy course includes around 5 hours of nicely done diagrams, code walkthroughs and demos so that we can learn quite a few things about Webpack. It not only teaches the tool itself, including quite a few gotchas and basic plugins, but also covers extra plugins for Babel, automating script bundle injection into index.html, adding chunk hashes to filenames, bundling CSS and outputting it as a single CSS file (vs. inlining), React & Redux specific setups, and advice like code splitting based on React-Router routes.
After the broad topics, there are sections on developing and deploying both static and non-static websites, some deployment examples (although all of them use helper utils, so don't expect a detailed AWS S3 deploy), and an explanation of how to use the webpack middleware for Express (it uses Node & Express as the backend server for the non-static website).
Considering it covers a tool, which at least for me is a topic that can easily get boring, it covered everything I expected, plus some extras you may or may not find interesting.