Kartones Blog

Be the change you wanna see in this world

OpenSSL certificate verify failed on Ruby & Windows

I was just trying out a more automated way of cleaning my non-recent Twitter posts when, upon running the small program, I got hit by an error like:

OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=...
read server certificate ...: certificate verify failed


If you search around the net, the first suggested solution is to add this dangerous line:

OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE


But deactivating security is not the best approach, so after some digging I came across a nice post that explains the issue, how to debug the specific problem, and partially how to solve it. As I use Windows, the instructions weren't complete for my case, but it all boils down to:

  • Download a CA certificate bundle, like for example Mozilla's
  • Install the .crt file (I installed it for All Users)
  • Set up the following OpenSSL environment variables:
    • SSL_CERT_DIR: Point to where you left the CA bundle
    • SSL_CERT_FILE: Point to the .pem file

Restart your command line, and you should be able to run with SSL peer verification active (as it should be).
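
To double-check, here is a minimal Ruby sketch of a connection with peer verification kept on (the URL is just an example, and the commented-out path is a placeholder for wherever you left the bundle):

require 'net/http'
require 'openssl'

uri = URI('https://api.twitter.com/')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
# Keep peer verification active; this is exactly what we just fixed
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
# The CA bundle is picked up from SSL_CERT_DIR/SSL_CERT_FILE, or can
# be pointed to explicitly:
# http.ca_file = 'C:/certs/cacert.pem'
http.start { puts 'SSL certificate verified OK' }

If the variables point to the wrong place, this fails with the same "certificate verify failed" error as before.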

Migration to IIS 8.0 and the new blog(s)

I've been running Kartones.net since 2006, and from the beginning used Community Server (2007, then the 2007.1 patch). Partly because it was the cool .NET blogging engine back then (there were others like .Text, but CS had forums, a file manager...), but also because it looked robust and had nice documentation.

Over the years I've built some small components, modules and HTTP handlers on top of it, and performed lots of tweaks and optimizations, but I also suffered one ugly scenario: the company behind CS2007 shut down the old SDK and documentation, leaving me with just other people's blog posts and reverse engineering to keep maintaining my installation. This has been the main reason why I will never again use commercial software that is not based on a fully open-source codebase; the core of Community Server 2007 had some assemblies without source code, which, combined with no documentation, meant nights of debugging for some errors or new features I wanted to build.


Last week my hosting provider informed me that I was switching to a newer machine, with SQL Server 2012, Windows Server 2012 and IIS 8.0. Coming from SQL Server 2008 and IIS 7.0, and keeping .NET 2.0/4.0, it should have been a smooth change... but things are never easy.

After switching, the site refused to load, not even detecting my Web.config changes to activate the detailed ASP.NET error messages. As CS stores exception traces in the DB but needs the core to be running, I had no error traces either. Cool deadlock.

So I went for the typical approach to an unknown scenario: replicate it. I grabbed my Windows Server 2012 VM, set up IIS 8.0 inside, and kept SQL Server 2008, as the DB was originally on that version and it shouldn't matter. I deployed a copy of my site, restored a backup of the DB, modified the connection strings, and ran it. And I got the same problem locally.

I spent hours trying to debug it and nail the problem, down to the tiring process of removing fragments of Web.config to narrow down which block was failing, and found it was the core Community Server files: remove them and the site would at least load (doing nothing, but ASP.NET completed its flow); add them back and there were errors everywhere.

I installed .NET 3.5 too (which includes 2.0) and set up all variations of IIS Application Pools (integrated pipeline/classic pipeline with each framework version), and I was only able to get a pseudo-detailed error when trying to run some CS controls inside the homepage's "latest posts" Repeater list.

This indicated that something inside had changed and my .NET code was not easily portable to IIS 8, and I'm no longer willing to patch such a problematic 7-year-old piece of software (I already had issues when I upgraded to .NET 3.5, and later with .NET 4.0), so the decision was clear: how the hell do I get my data out of this dinosaur and put it somewhere else?


I've already suffered WordPress, and moved out of it because PHP is a not-so-beautiful language on its own, without also having to suffer a terrible and insecure codebase. I've also had small issues in the past due to outages in the DB layer (SQL Server going down; being in a shared hosting environment reduces its reliability)... so being able to remove that requirement would be great... BlogEngine.NET was the winner.

Researching a bit, I found that BlogEngine supports BlogML out of the box and that somebody wrote a CS2007-to-BlogML converter, so the escape plan was ready. These tips for exporting to BlogML were also quite handy, as I ran into most of the issues mentioned there.


I've killed the Community Server problem and cleaned a lot of dead stuff along the way, but I've also broken everything from the landing page CSS (there's none right now) to most old links that had an ".aspx" suffix. I've migrated the main blogs and critical old sections to subdomains, and I've set up email notifications, auto-tweeting upon new posts and similar stuff, but there's still quite some work ahead:

  • New Kartones.Net landing page design
  • Themes for the 4 remaining blogs (old ones converted to static files are fine)
  • Fixing urls inside all migrated posts that link to other posts
  • Small customizations and tweaks that all blogs had (for example, Vicio Como Monos only shows certain categories/tags, handling others like genres)
  • Image galleries (at least I have one already done for BlogEngine)
  • Caching and optimizations: BE is not slow, but it has lots of stuff I don't want and does too many HTTP requests per page. The old blog loaded everything but images in under a second on average


I've also had the benefit of past nice decisions and approaches that made this less terrible, such as:

  • Separate subdomain for images. Not only do they load faster, but I didn't have to change any of them now, as the URLs are the same.
  • Experience already gained with ASP.NET/IIS URL rewrites. Not only full rewrite rules, but also the partial URL mappings. RSS and most critical paths were rerouted in no time.
  • Migration plans already researched. I hadn't done a proof of concept, but I already had lots of migration alternatives, from Jekyll or Node.js to BlogML, or using WordPress as an intermediate step to "something better". I also had some experience with BlogEngine, so the basic setup was a no-brainer.


Unfortunately, there are some "breaking changes" in all this, mainly regarding RSS and URLs. The new (remaining) blog URLs are:

The old general site feed, aggregating all blogs, is now gone and will redirect to this blog's RSS. If you want all the feeds, you must now subscribe to each individual blog. Sorry, but keeping an aggregate made little sense now that this is no longer a blog community, just individual blogs. The Twitter account KartonesNet will remain active, though, because it is quite easy to auto-post from all blogs there.


In summary, welcome to the new (forced) version of my blog ;)

Book Review: Obsequium

Obsequium book cover

Title: Obsequium

Author: Jaume Esteve, José Luis Sanz, Juan Manuel Moreno, Antonio Giner, Manuel Pazos, José Manuel Fernández, Enrique Colinet, José Manuel Braña Álvarez

Editorial: -

Obsequium is an ebook fully dedicated to one game, La Abadía del Crimen.

Sadly, fragments of the book are a rewrite (sometimes with almost the exact same words) of the Ocho Quilates references to the game. Having read that book so recently, it is sad to see such a lack of fresh content.

There is lots of filler background that I already know and don't care about, while what I'm supposed to be reading is a dissection of a specific videogame, not (again) the golden age of Spanish 8-bit games.

Chapter/Day III is two-fold: on one side it gives an excellent explanation of the isometric system and why the videogame applied it, and on the other side it provides comparisons with the movie and the book (related to an English online essay).

Day 4 is where the content gets really interesting for me (as a developer), with detailed programming insights from one of the two Spanish experts, Manuel Pazos, who has also done some conversions of the original game by reverse engineering. There are really cool things here, like how the compression of texts, maps, graphics and even the AI is performed, and some of the tricks to save memory and speed... real retro game dev. Too bad it feels short, as it is the kind of content I wished the full book was about.

The remaining chapters are not bad and provide interesting research, for example on all the remakes and adaptations the game has had (really interesting) or how some of its design decisions hold up against today's standards for games.

Overall, I expected way more detail and deeper research into the game's internals; I found some nice chapters, but was left with the general feeling of a merely decent read.


Read this and other reviews at my Book reviews page.

Book Review: Ocho Quilates (La edad de Oro del software español)

Book 1 Cover Book 2 Cover

Title: Ocho Quilates (Una historia de la Edad de Oro del software español) I: 1983-1986 & II: 1987-1992

Author: Jaume Esteve Gutiérrez

Editorial: Star-T Magazine Books


Ocho Quilates is a two-part book that covers the most prolific decade of the Spanish videogame scene, the era of the 8-bit computers: Amstrad, Spectrum, Commodore and MSX. A decade when many game companies were born and created games that marked our early lives.

I am fully biased, as I had an Amstrad PCW 8256 and grew up playing those damn hard Opera Soft, Topo Soft and Dinamic games; plus I remember the distributors back then, and the magazines my parents bought me, full of those games with incredible covers and sometimes not-so-cool gameplay...

These books gather as much info and as many interviews as possible, ordering them chronologically so that we can learn what happened behind the scenes. They also sadly tell how we Spaniards got inside a bubble of rejecting the 16-bit era until it was too late, and the Nintendo-SEGA duo ate our national market (also hurt by piracy) and killed all Spanish game dev companies, with few exceptions.

Lots of references (hundreds of them, actually!), lots of people mentioned, games and their history... it is the best way to keep the memories of what happened between 1983 and 1992.

If there is anything wrong with the book, it's that sometimes the references are so abundant that two or three sit together and are hard to click on the Kindle. And a few of them in the second book were broken (showing "attr error" instead of the reference description).

A book you have to read if you lived that era. Nostalgia++.


Read this and other reviews at my Book reviews page.

Non-trivial Rails 3.x routing

Disclaimer: there might be a better solution for this scenario; my experience with Ruby and Rails is just 6 months. You're welcome to drop me a tweet or a message with better approaches.


At CartoDB, as one of the required steps for the 3.0 version, we recently needed to change the URLs from the "classic" format of USER.cartodb.com/whatever to ORG.cartodb.com/u/USER/whatever.

This is a change that usually gives lots of headaches. At a previous job, a similar change required a huge refactor of the MVC engine and the hyperlink-building system. At another it was quicker, but just because the only solution was an (ugly) regex hack deep inside the MVC framework.

Rails is initially all unicorns and rainbows: a magical routing system that reduces the code you write, automatically maps HTTP verbs to controller actions, and even differentiates between acting upon items or collections... A developer's heaven regarding MVC configuration ease.

Except that, for advanced scenarios, this magic fades away and you need to fall back to more traditional and detailed configuration.

This is great for the majority of typical websites:

scope '/admin' do
  resources :posts, :comments
end


But now imagine these new rules:

  • Any URL might come, or might not, with an additional fragment, including a variable
  • This fragment might be optional, or might be mandatory


How do you specify an optional parameter in the Rails routes file? Like this:

get '(/u/:user_domain)/dashboard/tables' => ...


Looks easy... but remember that the param is optional. It might not be present... so we need to make sure it is always either sent or nil (but defined), so the code doesn't break. For this I implemented a before_filter in the base ApplicationController so it is always present, as sketched below.
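
A minimal sketch of that filter (names are illustrative, not the actual CartoDB code):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter :ensure_user_domain_param

  private

  # Guarantee params[:user_domain] always exists (nil when the
  # (/u/:user_domain) fragment is absent), so code reading it
  # never breaks.
  def ensure_user_domain_param
    params[:user_domain] = nil unless params.has_key?(:user_domain)
  end
end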


Then everything looked ready... until I checked the code and found a myriad of links built in all possible Rails ways: relative hardcoded, full path hardcoded, xxxx_path, xxxx_url, redirect_to xxxx, link_to xxxx...
And not only that: I also had to support that optional :user_domain parameter in most URLs, plus other ids or params sent like in this sample route:

(/u/:user_domain)/dashboard/visualizations/shared/tag/:tag/:page


In the end, I decided to take the following new (and mandatory from now on) approach:

  • Full "literal-based" route descriptions. They don't look fancy or magical anymore, but they are practical, they work, and anybody, even without knowing Rails, knows where they point (see the sketch after this list).
  • Always given a name/alias ("as xxxx"), so they can be called from views and controllers with _url/_path helpers without collisions, ambiguity, or Rails' magical way of autogenerating URLs (which has given us some problems and, for example, doesn't allow parameters).
  • Every application link has to be built with _url/_path. No more handcrafted URLs.
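
For example, a tag route ends up declared like this (controller, action and alias are hypothetical, just to show the shape):

# config/routes.rb
get '(/u/:user_domain)/tag/:tag(/:page)' => 'tags#show', as: :public_tag

That "as:" alias is what generates the public_tag_url/public_tag_path helpers used below.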

This makes links to URLs a bit bigger than usual:

<%= link_to tag_name, public_tag_url(user_domain: params[:user_domain], tag: tag_name) %>


But now we can be 100% sure of where that public_tag URL will go with just a quick look at the routes file, we support the optional parameter, and we also ease any possible future refactor, as searching for every link will be much easier than it was (it took 2 days and some bugfixes to make the codebase uniform).


About the "might be mandatory" part, what I did was add another before_filter by default in the base Admin-scoped ApplicationController, and then disable it (with skip_before_filter) on those controllers where it was optional.
Wherever it is mandatory, if the user is logged in and belongs to an organization, the app will redirect him to the organization-based URL. But for things like APIs we keep both the old and the new format, to preserve compatibility with existing third-party applications and projects.
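
Roughly like this (again a sketch; class and method names are made up, and current_user is assumed to be the usual authentication helper):

# Base controller for the Admin-scoped pages
class Admin::ApplicationController < ApplicationController
  # Mandatory by default for every admin controller
  before_filter :ensure_organization_url

  private

  # Logged-in organization users get redirected to the
  # organization-based URL format
  def ensure_organization_url
    return if params[:user_domain].present?
    return unless current_user && current_user.organization
    redirect_to "/u/#{current_user.username}#{request.path}"
  end
end

# And a controller where the fragment is optional just opts out:
class Admin::VisualizationsController < Admin::ApplicationController
  skip_before_filter :ensure_organization_url
end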



Overall, I don't blame Rails for making it so easy to reduce code. I understand that for the majority of scenarios it really eases coding and speeds up this configuration, and nobody could have guessed these routing changes... But what is a good practice is to be consistent: even if you have 5 ways of generating URLs/links, decide on using just one that is flexible enough for hypothetical future changes.


Magic has a problem: to use it, you need to be a mage. We're now a team of (mostly) non-Ruby experts that needs to build lots of features, and we cannot rely (at least in the short term) on everybody having deep knowledge of Rails, so we'll instead go the more traditional ways that ease the first steps with the platform.