Kartones Blog

Be the change you wanna see in this world

Having a good, disposable devbox

Around 2005, virtualization was already working nicely, and at work, as we did .NET consulting, we started using virtual machines (with Virtual PC) as our main development environment. We would have a base VM snapshot with Visual Studio, and when starting a project we'd just clone it and add the specific requirements (e.g. SQL Server). It was pretty much manual, but still a great improvement over having to clean or even format your host machine between projects. Also, migrating to new hardware was seamless: just copy the VM image and you were good to go.

Now things have evolved a lot, and having switched to a mostly open source stack and a Linux development environment, there's a wider array of options, plus greater automation capabilities. Until recently I had never been directly involved in managing development tooling resources or projects (except for one or two small scripts), but I've tried to use whatever environments were provided, with varied results.

Setting aside the "optimal" scenario of having everything on your machine (it's the fastest and quickest in the short term, but has lots of disadvantages too), I moved from manually managed local VMs to remote dev machines, where you would rsync files, SSH in when needed to restart a process, and usually deploy your code to another location (being web dev, this mostly meant a local dev site like kartones.localhost.lan and a remote webserver like kartones.xxx.dev). This approach is not bad (it worked for me for quite a while and across multiple jobs) but has two big disadvantages:

  • No connection means you can't work: you have to set up one (or more) backup DSL lines with a different ISP at the office for outages (which, at least in Spain, are not so infrequent)
  • The noisy neighbor problem: if you share your machine with four other developers and your build process is CPU- or IO-heavy, or you must run some Hadoop map-reduces, you can easily eat all the resources and impede others' work

With few people, remote dev machines are a good approach, but as you grow they become a severe limitation.

So, how do we solve these issues? Well, thanks to VirtualBox, Vagrant and Puppet we can now easily have provisioned development virtual machines: local but instrumented VMs that closely match a production server and whose configuration and installed packages are managed by the same tool that sets up production machines, just requiring different config sections (mostly a copy + paste + rename task). I've lived through three iterations of this approach at different jobs, from a quite manual (and badly working) version, to one that "worked but wasn't smooth enough to replace a local dev env", to my current job's setup, which works so nicely that we no longer support anything except the devbox.
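To give an idea of what such an instrumented VM looks like, here is a minimal Vagrantfile sketch; the box name, resource sizes, paths and manifest name are made up for illustration, the point being that the VirtualBox VM gets provisioned by the same Puppet manifests that configure production:

# Hypothetical minimal Vagrantfile for a Puppet-provisioned devbox
Vagrant.configure("2") do |config|
  # Base box; use whatever Ubuntu box your team standardizes on
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus = 2
  end

  # Reuse the same Puppet manifests that production machines use
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file = "devbox.pp"
  end
end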

It took us weeks of iterating and forcing the whole tech team to install it by themselves, just following the README instructions and providing either feedback or direct commits with improvements, but it feels worth it because:

  • We're all on the same environment: problems are shared, so they get quickly solved and are easily reproducible. If something breaks, it breaks for everyone, so there's no broken windows effect
  • The process is dead easy to follow: I push for every service and tool to have a README.md detailing the instructions, and in this case the more we use it, the easier it gets and the more we improve and automate it
  • Fast and isolated: not native speed, but the faster your hardware the faster it goes, and you never hurt other team members' speed with heavy scripts you run
  • No need to depend on external storage for a backup: in the past I used to carry a USB drive with a backup of the VM, just in case the original died, had a fatal update (Ubuntu is great, but more than once an update has broken my VM at boot) or I just wanted to revert some undesired change
  • Almost the same environment as production: this ultimately depends on you, but the closer you get to replicating production, the easier it is to triage configuration issues regarding the web server, caching, connection pools...
  • Easily updateable: Linux kernel updates, provisioned software updates, individual repository dependency updates... Everything handled via single commands
  • Everybody participates: Have an idea to automate something? Code it and push it!
  • Helps keep codebases homogeneous: having templates for microservices and web apps is handy, but having lots of services that have to be configured, launched, tested, etc. means you naturally end up setting conventions for folder and code structure, helper scripts, launchers/runners... Make it easy to do things the right way and it will yield better results (or at least make it hard to go wrong!).


We bet so hard on keeping this process quick, easy and painless that, if I were allowed to, I'd set up the devboxes to self-destruct after two weeks of use, to force everybody to reinstall them and always be sure that, no matter what happens, you can reprovision and have a working dev environment in a few minutes. I do manually delete mine (including the code repositories), and it feels peaceful and calming when all you have to do is:

  1. Clone the operations repository
  2. vagrant provision
  3. vagrant ssh + run install.sh script providing your desired username
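For illustration, the whole flow fits in a handful of commands; the repository URL and the username argument below are made up:

# 1. Clone the operations repository (hypothetical URL)
git clone git@example.com:company/operations.git
cd operations

# 2. Create and provision the VM ("vagrant up" builds it first if it doesn't exist yet)
vagrant up
vagrant provision

# 3. SSH into the box and run the install script with your desired username
vagrant ssh
./install.sh kartones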


And this is just the beginning: now with containers (Docker & the like) we're moving towards an "optimized" version where you can replicate something really close to production on your local machine, with disposable instances, always updated (and using the same mechanisms as production, to avoid nasty errors), and making much better use of resources. But I haven't talked about them because we haven't yet migrated to containers, so I have much to learn and experiment with before being in a position to give an opinion. I'm just eager to try it!

A few articles and tools about ZX Spectrum programming

The ZX Spectrum was a classic from the eighties, with its long loading times and 15 colors. It wasn't the fastest nor the best, but it definitely had really good games, and its relatively cheap price made it common when I was young (before video consoles took its place).

Double Dragon on the Spectrum

While I don't have much time to learn how to code for it (and if I had, I'd focus on learning original Game Boy development instead), I love when people write tutorials and/or posts about how certain stuff from old machines worked, and I just found some gems which are also quite up to date (as a matter of fact, the author is still writing posts in the series). Just take a look at them and learn how things worked in the Speccy, and how more modern techniques can be applied to squeeze some extra speed or improve the development process:

Also, I went on to read a bit about the sprite color glitches and found that, below that link's main article, the comments section has a few examples of how to minimize the problem; make sure not to skip them, as the article only mentions the issue but doesn't provide solutions.

The full toolset to develop for the Spectrum can be found at https://github.com/jarikomppa/speccy, including some custom-made tools, an image converter for the Spectrum, and even a sample game, Solargun.


Update #1: Added img2spec repo link.

Code and style checks for Python at Sublime Text

As I'm now coding in Python (and learning it), here comes a post similar to the one I did for Ruby, but adapted. Basically, I want to centralize my coding to use, if possible, just one IDE for every language and file, so while PyCharm is probably better (JetBrains tools are usually awesome), I get more flexibility with Sublime Text. Here are the specific packages I use:

  • Package Control: Plugin/package manager, required for the other components
  • SublimeLinter 3: Generic text linter for the editor. Required for the specific linters
  • PEP8 (Style guide for Python coding): pip install pep8
  • SublimeLinter-PEP8: To have the PEP8 rules inside the editor

Despite having a pep8 configuration in the project's tox.ini file, if you see Sublime ignoring it, you can force some configuration settings for any linter by going to:

Preferences -> Package Settings -> SublimeLinter -> Settings – User

And then adding custom rules under the pep8 section, e.g.:

"pep8": {
    "max-line-length": 120
}
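For reference, the project-level equivalent that the pep8 tool itself reads from tox.ini (the value here is just an example) would look like this:

[pep8]
max-line-length = 120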


I also have another package pending review, as flake8 is sometimes very, very strict.
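If I do end up adopting it, my understanding is that flake8 can be tamed in a similar per-project way, e.g. with a tox.ini section along these lines (the specific values are just an example):

[flake8]
# example: allow longer lines and skip E402 (module-level import not at top of file)
max-line-length = 120
ignore = E402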


And that's all for now; if I add more interesting packages I'll update the post, and of course comments and suggestions are welcome.

UEFI, NVMe and being stubborn

This weekend I wanted to install Linux on my new work laptop (a Dell XPS 13 9350, just in case someone else runs into similar issues). As in the past I had some issues with UEFI booting and Ubuntu (versions 12 & 14), the first thing I did was go into the BIOS, disable Secure Boot and enable legacy boot (the classic BIOS). I then tried to install Ubuntu 15.10 from a USB drive... and it broke trying to install GRUB after the install itself.

Some initial research about the failure to set up the bootloader at /dev/nvme0p1 (instead of the classic /dev/sda1) taught me about NVM Express controllers (aka NVMe). I thought that maybe updating GParted to 0.24 (which supports NVMe) would solve it. To do that I:

  1. Booted Ubuntu 15.10 from the USB
  2. Installed this package
  3. sudo apt-get install gparted
  4. Created the partitions from GParted, and when installing just used them (wiping the data but not recreating anything)

It didn't work out :(  I could see the hard disk partitions, but the install would still fail at the (final) bootloader step.

Next I tried to just reinstall the GRUB bootloader (using Boot-Repair), with some retries recompiling GRUB and even updating the partition's Linux kernel to the latest one just to be sure... without luck. The bootloader was installed, but it couldn't boot the OS.

Two afternoons later I decided to make one last attempt before giving up: as the laptop's boot menu allows running the USB drive Ubuntu installer using UEFI (instead of "legacy boot"), I just tried it to see what it would do... And it worked!

If I had just RTFM about Ubuntu's UEFI support, I would have seen that it now works, and that Ubuntu 15.10 can somehow manage NVMe partitions at install time (despite shipping an old version of GParted...). Anyway, I learned about some recent developments in "BIOS" and disk firmware, so not all the time & effort was wasted.

Also, it is interesting to see how Intel seems to be leading these evolutionary changes by presenting specifications and opening them up to others so they become standards.


Note: At the time of this writing, everything works fine on the XPS 9350 with the aforementioned Ubuntu 15.10 except the wifi, which seems to be a "too new" Broadcom model and doesn't even get detected, so for the time being I'm stuck relying on an external USB wifi dongle.