Last month I forgot to post anything, so here comes one of the books I read a few weeks ago, Speccy Nation. Small, cheap and not especially good, but hey, I paid less than 2€ for it, so what could I expect...
Title: Speccy Nation
Author: Dan Whitehead
A small 124-page B&W book about 50 ZX Spectrum games chosen from a British perspective. Nothing more, nothing less. Some are terrible games (chosen on purpose), some are really good classics, but I really miss international games, as for example Spain had lots of really good titles.
The writing is good, even for the really straightforward and simple titles, knowing when to simply explain how the game worked and when to expand on what made it different from the rest.
Some chapters about the Spectrum itself or its history would have been welcome, but we get exactly what we're promised. The game descriptions vary in length, some being a bit short (1 page including a screenshot), others being 2 pages long. Sometimes they feel too brief, and in a few cases you don't even get to know exactly how the game was played, but that is the exception, not the norm.
Overall, a really cheap title, so if you feel nostalgic it is a good quick read, maybe to grab ideas for games to play (if you're able to get them).
Last weekend I found some spam at two of my BlogEngine.NET blogs. It is not the first time, and in the past updating to the latest major version solved the issue, but this time I had to switch from 2.9.X to 3.2, and I had already suffered a migration from 1.9 to 2.0 that gave me quite a few headaches. One of my main reasons to move away from WordPress was to stop this tiring battle between spammers and new versions, which forced you to update way too frequently or face serious security bugs and spam holes. Combine that with a general feeling of being tired of big, admin-driven blog engines, and I needed a big change.
My premises were:
After reading some articles and checking a few static generators, I found Pelican. I peeked a bit at the source code, documentation and plugins, and it looked quite simple. I did some local tests, wrote some posts using existing content, read the documentation, and decided to keep it.
The setup is so easy that I got a blog running locally with a base theme in a few minutes. But I had one issue: all my posts are in XML files (thankfully I had chosen not to use a DB for storage) and Pelican uses Markdown... so I had to transform the data somehow.
I don't do anything too complex like <meta> keywords, and I set some post tags but don't even show them, so in general I just needed a few fields for the Pelican post format:
Title: ...
Slug: ...
Date: ...
Tags: ...,...

...CONTENT (which can be directly HTML)...
The XML has a very simple structure, with all the fields present and only the content HTML-encoded, so a few regular-expression searches and reverse transformations were enough to port everything. As I plan to migrate another blog and prefer to be able to reproduce the whole migration any number of times, I built a small script. It only handles basic stuff: it doesn't extract other "basic" fields like authors, and it only works with posts (I manually migrated the pages as I had few), but it does what it does well and might be of some use to somebody else.
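The core of such a conversion can be sketched in a few lines. This is not the actual migration script, just a minimal illustration of the idea; the element names (title, slug, pubDate, content, tags) are assumptions about the post XML layout, and a real run would loop over files instead of a sample string:

```python
# Sketch: turn one HTML-encoded XML post into a Pelican Markdown file.
# Element names below are assumed, not taken from the real script.
import xml.etree.ElementTree as ET
from html import unescape

SAMPLE_POST = """<post>
  <title>Hello Pelican</title>
  <slug>hello-pelican</slug>
  <pubDate>2015-08-01 10:00:00</pubDate>
  <content>&lt;p&gt;Some &lt;b&gt;HTML&lt;/b&gt; content.&lt;/p&gt;</content>
  <tags><tag>python</tag><tag>pelican</tag></tags>
</post>"""

def to_pelican(xml_text):
    root = ET.fromstring(xml_text)
    tags = ",".join(t.text for t in root.find("tags"))
    header = "\n".join([
        "Title: " + root.findtext("title"),
        "Slug: " + root.findtext("slug"),
        "Date: " + root.findtext("pubDate"),
        "Tags: " + tags,
    ])
    # Pelican's Markdown reader passes raw HTML through, so the
    # HTML-encoded content only needs to be unescaped, not converted.
    return header + "\n\n" + unescape(root.findtext("content"))

print(to_pelican(SAMPLE_POST))
```

The nice part is that the body needs no real Markdown conversion: unescaping the stored HTML and prepending the metadata header is enough for Pelican to pick it up.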
You can find both my scripts and a small plugin (to limit the RSS/Atom syndication feed output to a certain number of items, as by default it dumps every post ever written) at my GitHub.
I'll probably work on more small plugins in the future, as I have some improvement ideas regarding the generated output, but I couldn't be happier with the results. A proper example of "the Python way of life": simple yet practical code, easy to set up and use, and it does its job without unnecessary features.
UPDATE: I added another small script to GitHub to do some post-generation tasks, like creating duplicates (as "aliases") and moving or removing certain files from the output folder.
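The post-generation idea is plain file housekeeping, so it can be sketched generically. This is a hedged illustration rather than the actual script; the alias mapping and the glob pattern are made up for the demo:

```python
# Sketch of post-generation housekeeping on a static site's output
# folder: duplicate pages under "alias" paths and remove leftovers.
# The concrete paths and patterns here are illustrative only.
import shutil
from pathlib import Path

def post_process(output_dir, aliases, remove_globs):
    """aliases: {existing relative path: duplicate relative path}."""
    out = Path(output_dir)
    for source, alias in aliases.items():
        target = out / alias
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(out / source, target)  # duplicate, don't move
    for pattern in remove_globs:
        for leftover in out.glob(pattern):
            leftover.unlink()

# Demo against a throwaway output folder:
demo = Path("demo_output")
demo.mkdir(exist_ok=True)
(demo / "old-post.html").write_text("<html>post</html>")
(demo / "feeds.tmp").write_text("scratch")
post_process(demo, {"old-post.html": "aliases/old-post.html"}, ["*.tmp"])
print(sorted(p.as_posix() for p in demo.rglob("*") if p.is_file()))
```

Running it leaves the original page, a copy at the alias path, and no `.tmp` leftovers, which is all an "alias" really needs to be on a static site.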
Due to my new job I wanted to learn about microservices, and this book had a good reputation and recommendations, so I decided to read it. After finishing it yesterday, I can only recommend it to anyone starting with these system design principles.
Title: Building Microservices
Author: Sam Newman
I'm pretty new to microservices, but working in a startup means you usually have to ramp up from not knowing something to being able to use it on a daily basis, and quickly. While I have a colleague experienced in the subject, I cannot be asking him all the time, so I grabbed this book based on some recommendations and have been reading it on my commute. And spoiling the review: I definitely think it is worth it.
I come from a background of classic monolithic web apps, a SOA platform and a mostly shared-codebase set of REST APIs and components, so for me it's a process of true discovery and mindset revolution to think in really small, bounded and clearly defined services. It is too easy to cross those limits and do more things in the same place, or to create dependencies or couplings by mistake. One of the first services I built a few months ago really acts as a distributed synchronous component, not honouring the microservices "autonomous" principle... So I have a lot to learn, but this book really has taught me how to do things better.
Always including real-world scenarios and stories from the author, we're slowly introduced to the basic pillars of a microservices architecture: the why, what and how of both building from scratch and separating existing systems into pieces, decorated with lots of advice and, curiously, not many dogmas, but tips and "usual solutions" (not mandatory ones) about approaches you can take. Whereas many books speak of "universal truths", here the tone is more like "decide for yourself and take my advice as guidance", which I like.
Moving towards asynchronous processing and communications, how to handle code reuse, libraries and shared components, orchestration (or why it is good to avoid it), metrics, monitoring, cross-functional requirements, testing, deployment, security, scaling... There are so many concepts that the last chapter of the book is a "quick" summary which I think we should print and leave at the office as our small "microservices builder manual".
I had been told about some of the concepts and had thought about a few myself (though, as usually happens with design patterns, not exactly in the best way), but the book provides lots of proper names and topics you can then read about more specifically (like caching for resilience, synthetic transactions, consumer-driven contracts...).
I could go on longer, but I'd summarize it as a must-read book if you work with microservices. Highly recommended.
"Keep It Simple, Stupid": A nice design principle, usualy forgotten in the development world.
This post is mainly a list of those small pet projects and tools, nothing to be proud of technically nor visually, but that really do help me optimize my time and/or ease some tasks. So, here comes the list:
And probably more small tools that I don't remember right now, like snippets to generate base64 URLs to embed images into HTML, hash calculators,... My goal as a developer is to make my life easier, because work already provides the challenges and hard thinking.
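Those base64 snippets are a good example of how tiny these helpers can be. As a hedged sketch of the idea (the file name is just for the demo), turning any image into an embeddable data URI fits in a few lines:

```python
# Tiny time-saver: turn an image file into a base64 data URI ready
# to drop into an <img src="..."> tag. The demo file is illustrative.
import base64
import mimetypes

def to_data_uri(path):
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return "data:{};base64,{}".format(mime, payload)

# Demo with a minimal 1x1 GIF written to disk first:
with open("pixel.gif", "wb") as f:
    f.write(base64.b64decode("R0lGODlhAQABAAAAACwAAAAAAQABAAA7"))
print(to_data_uri("pixel.gif"))  # → data:image/gif;base64,R0lGODlh...
```

Nothing award-winning, but exactly the kind of snippet that saves a trip to some online converter every time.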
One thing all these examples don't imply is the need to build everything from scratch; that would be dumb. It just means that for certain scenarios you don't need to strive for perfection. For example, I use an existing blogging platform rather than one made from scratch, but I moved from a working yet really old engine to the simpler, non-DB-based BlogEngine.NET it currently runs on. It has some tweaks and optimizations, but maintenance is easy and a local copy for development required 3 minutes of configuring Internet Information Server on Windows.
In the end, what matters is having something working and adding value, instead of a fancy 90%-code-coverage application that doesn't even have a single feature fully implemented. My "home code" won't win any awards, but it works great.
Around 2005 virtualization was already working nicely, and at work, as we did .NET consulting, we started using virtual machines (with Virtual PC) as our main development environment. We would have a base VM snapshot with Visual Studio, and when starting a project we'd just clone it and add the specific requirements (e.g. SQL Server). It was pretty much manual, but still a great improvement over having to clean or even format your host machine between projects. Also, migrating to new hardware was seamless: just copy the VM image and you were good to go.
Now things have evolved a lot, and having switched to a mostly open-source stack and a Linux development environment, there's a wider array of options, plus greater automation capabilities. I've never been directly involved in managing development tools or environments (except for one or two small scripts) until recently, but I've tried to use whatever environments were provided, with varied results.
Keeping the "optimal" scenario of having everything on your machine (is the fastest and quickest in the short term but has lots of disadvantages too), I moved from manually managed local VMs to having remote dev machines, where you would rsync files, SSH when needed to restart a process, and usually deployed your code to another location (being web dev, mostly having a local dev with your kartones.localhost.lan and remote webserver like kartones.xxx.dev). This approach is not bad (as worked for me for quite a while and in multple jobs) but has two big disadvantages:
With few people, remote dev machines are a good approach, but as you grow they become a severe limitation.
So, how do we solve these issues? Well, thanks to VirtualBox, Vagrant and Puppet we can now easily have provisioned development virtual machines: local but instrumented VMs that closely match a production server, and whose configuration and installed packages are managed by the same tool that sets up production machines, just requiring different config sections (mostly a copy + paste + rename task). I've lived through three iterations of this approach at different jobs, from a quite manual (and badly working) version, to one "working but not smooth enough to replace a local dev env", to my current job's setup, which works so nicely that we now don't support anything except the devbox.
It took us weeks of iterating and forcing the whole tech team to install it by themselves, just following the README instructions and providing either feedback or direct commits with improvements, but it feels worth it because:
We bet so hard on making this process quick, easy and painless that, if I were allowed to, I'd set up the devboxes to self-destruct after 2 weeks of use, to force everybody to reinstall them and always be sure that, no matter what happens, you can reprovision and have a working dev environment in a few minutes. I manually delete mine (including the code repositories), and you feel at peace and calm when you just do:
And this is just the beginning. Now, with containers like Docker, we're moving towards an "optimized" version where you can replicate something really close to production on your local machine, with disposable instances, always updated (and using the same mechanisms as production, to avoid nasty errors), and with much better resource usage. But I haven't talked about them because we haven't yet migrated to containers, so I have much to learn and experiment with before being in a position to give an opinion. I'm just eager to try it!