I joined ticketea's engineering team last month, and apart from learning how things work and doing a few weeks of bugfixing (to get comfortable with the code and peek at some of the projects), I also got assigned to one of the new projects. There are three projects that we have started from scratch, which allow us to decide whether to keep or change the current platform (which could be more automated). In order to make those decisions, we did some research and built proofs of concept.
The main goal of the research was to set up a basic AWS Elastic Beanstalk orchestration system, allowing us to perform deploys, local runs, etc. without needing to manually handle EC2 instances and build the corresponding toolset, as we don't have any systems team.
Our results are mixed but still subject to change: we haven't yet discarded or settled on a certain route, and we keep exploring multiple paths with the projects to decide later. Despite that, I'll leave here some notes and references. Don't expect great notes, as this is more of a cleanup of a worklog/checklist (actually, it was a simple GitHub issue).
Update: I wrote this blog post, which might be of interest as it shows how to access an EC2 service from a Docker container running with Elastic Beanstalk: Securely access AWS Parameter Store from your Elastic Beanstalk Docker containers
We'll stick with CircleCI as our test runner, builder and probably continuous deployment tool for staging. Version 2.0 works nicely with containers and, despite the configuration being heavily modified from v1.0, the changes were quick to perform.
`dev` branch was easy to implement
EB has been relegated to staging/production deployment. For that use case, the cluster features (load balancing, rolling deploys, etcetera) are great and very easy to use. For local development, instead, it ranges from painful to directly impossible to make it work decently without hacks. The reasons are multiple, primarily:
`eb local` works only on pretty much factory-default scenarios. As soon as you start working with real services, it just doesn't work
Handling multiple environments (development, staging, `production`, etc.) means one of the two following hacks:
A `dockerrun.aws.json` with placeholder variables that you replace with the appropriate environment values
`dockerrun.aws.json` files in subfolders (one per environment), moved via Makefile or similar to the root depending on where you run it
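A minimal Python sketch of the first hack (the template content and names here are assumptions for illustration, not our actual setup): render the `dockerrun.aws.json` from a template by substituting placeholder variables before deploying.

```python
import json
from string import Template

def render_dockerrun(template_text: str, values: dict) -> str:
    """Replace $PLACEHOLDER variables with concrete environment values."""
    rendered = Template(template_text).substitute(values)
    json.loads(rendered)  # fail fast if the result is not valid JSON
    return rendered

# Hypothetical template fragment; in practice you would pass dict(os.environ)
# and write the result out before calling `eb deploy`
template = '{"AWSEBDockerrunVersion": "1", "Image": {"Name": "$DOCKER_IMAGE"}}'
rendered = render_dockerrun(template, {"DOCKER_IMAGE": "myapp:1.2.3"})
```

Using `Template.substitute` (instead of `safe_substitute`) makes the render blow up if any placeholder is missing, which is safer than silently deploying a half-filled file.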
One of the teams, after asking friends and colleagues for feedback, is testing Terraform. It looks promising and is working fine for them, but it also needs maintenance, so there is no firm decision yet on whether to use it or stick to Elastic Beanstalk and Makefiles (at least for now).
We set up a registry and pushed both development and production images after successful builds. It works quite nicely, and the only reason we are not using them actively is to avoid the permissions hell you enter once you want to share images between different Amazon accounts (not just IAM users on the same account, but fully separate ones).
We are using Redis for our project: a Docker image for development, and ElastiCache for staging and production.
When kartones.net was a blogging community and not my current personal minimalistic landing page, one of the blogs that my friend Lobo666 and I maintained was Uncut. With the change to BlogEngine.Net it kept working easily, with a combination of a WYSIWYG editor (or Windows Live Writer) and uploading post images via FTP (a minor manual step). But when I recently moved everything to static sites, as Pelican not only doesn't provide any editor but forces you to build the site to preview changes, my friend was effectively unable to keep posting to the blog.
On the other hand, I already had some post-processing scripts to clean up some files that were always copied to the output folder (and thus uploaded to the site) and to do other tiny tasks, like duplicating files (I want to maintain backwards compatibility with the original RSS feed addresses of the old blogs). They were ad-hoc, but after showing them to my friend he simply asked me "if I could just make those scripts also upload the modified files automatically". And indeed, making some changes to optionally pass a post identifier via the command line (I decided to use the slug) would help. It would also ease things to remove all the "full indexed pages" that Pelican builds (the `index<zero-to-almost-infinite>.html` pages), leaving just 10 pages and a link to the full archives page:
This way, and removing the tags, categories and authors subfolders as I don't use them, the number of modified files to upload for a mere new-blog-post action is around a dozen, making it blazing fast to "deploy" with some Python code. In the end, I generalized the script for the three blogs that I still write and/or maintain: by specifying a few configuration parameters you can define folders to create or delete, files to copy, remove or duplicate, truncate the index files... and of course upload a post or just build without uploading.
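As a sketch of what such a configuration could look like (every name here is hypothetical, not the actual script's API; the real publisher has its own structure):

```python
# Hypothetical per-blog configuration for a Pelican post-processing/publish
# script, driving the cleanup and upload steps described above
BLOG_CONFIG = {
    "output_path": "output",
    "folders_to_delete": ["tags", "categories", "authors"],
    # duplicate files, e.g. to keep the old blogs' RSS feed URLs working
    "files_to_duplicate": [("feeds/all.rss.xml", "rss.xml")],
    # keep only the first N paginated index pages, drop the rest
    "max_index_pages": 10,
}

def index_pages_to_keep(max_pages: int) -> set:
    """File names of the paginated index pages that survive truncation.

    Pelican paginates as index.html, index2.html, index3.html, ...
    """
    return {"index.html"} | {f"index{n}.html" for n in range(2, max_pages + 1)}
```

Anything not listed in the "keep" set (index11.html and beyond) gets deleted before upload, which is what keeps a new-post deploy down to around a dozen files.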
I don't want to go on much longer, as the utility of this tool is limited and very specific, so getting to the point: I uploaded the script files to my assorted Python GitHub repo. The direct URL of the publisher files is: https://github.com/Kartones/PythonAssorted/tree/master/pelican/publisher.
Usage is quite simple:
python3 publisher.py your-great-post-slug
And to only build:
And that's all. Until next time :)
As I recently switched jobs and took a few days of vacation in between, there's not much relevant to write about on the personal side, so here's another bunch of relevant articles I've read recently.
I had a bunch of links pending, but the past weeks have been quite busy. It's so sad that unethical and plainly wrong company behaviours have been dominating the news ecosystem lately...
`dev` branch and then merge to
`master`) and it speeds up the flow a lot, especially if you do TDD or pair programming.
Note: I reported this technique to the company behind the game back in August 2015. I never got a reply.
I like videogames, so when I read about the "clicker" genre, I wanted to check how those games played. Setting aside the mechanics themselves and my personal feelings about this kind of game, one title that caught my attention for being better than average was Clicker Heroes. After playing a while it looked to me as if the difficulty curve was quite exponential, requiring either lots of patience or spending money on in-app purchases, so I wanted to confirm my suspicions.
Checking the game binaries I saw that there was a `HeroClicker.swf`, so it was a Flash game. I had already peeked inside and even disassembled SWFs with Sothink SWF Decompiler before, so it was my chosen tool.
I started peeking at the insides, and by mere luck I ended up looking at the `ImportScreen` class. It had a constant called `SALT` just below the variable `_userData`, so it caught my eye. I ran the game and saw that in the options you can export your data to an apparently encrypted TXT file, and then import it back... hacking my data was way more appealing, and by chance I had a possible attack vector via the import logic.
There was another constant with a maybe too descriptive name, `TEXT_SPLITTER`, and scrolling down I found the `fromAntiCheatFormat` methods, performing MD5 hashes with the salt over the contents retrieved from the save data.
The sprinkling or scattering algorithm was not hard to read:
And then when writing the content of the "encrypted" data to the file:
`TEXT_SPLITTER` constant as it is
`SALT` constant and write it
And of course, the inverse process to import the data.
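The actual tool is written in Ruby, but the whole export/import round trip can be sketched in a few lines of Python (the `SALT` value below is a placeholder, not the game's real constant; the splitter string is the one visible in the screenshots):

```python
import hashlib
import random
import string

TEXT_SPLITTER = "Fe12NAfA3R6z4k0z"   # as seen in the decompiled code/screenshots
SALT = "not-the-real-salt"           # placeholder for the decompiled SALT constant

def export_save(data: str) -> str:
    # "sprinkle": after each real character, insert a junk character
    # (the game uses random ones; blank spaces work just as well)
    sprinkled = "".join(c + random.choice(string.ascii_letters) for c in data)
    checksum = hashlib.md5((data + SALT).encode()).hexdigest()
    return sprinkled + TEXT_SPLITTER + checksum

def import_save(blob: str) -> str:
    sprinkled, checksum = blob.split(TEXT_SPLITTER)
    data = sprinkled[::2]  # keep only the characters at even positions
    assert hashlib.md5((data + SALT).encode()).hexdigest() == checksum, "corrupt save"
    return data
```

Exporting writes sprinkled data + splitter + MD5(data + salt); importing reverses it and validates the hash, which is why any edit to the save must also recompute the checksum.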
I built a small tool in Ruby to apply this algorithm, and after saving/exporting my character it did work, giving me access to shiny data like the following:
"soulsSpent": "0", "primalSouls": "34152", "mercenaryCount": 5, "gold": "1.244277239155236e135", "baseClickDamage": 15, "transcendent": false, "epicHeroReceivedUpTo": 1660, "rubies": 7165,
Writing the inverse was easy, and it confirmed that everything worked fine.
This is how the end of an original encrypted file looks:
And this is how a (valid) file encrypted with my tool/script looks. As you can see, instead of random characters I just insert blank spaces at odd positions:
The truth is that, even cheating, the game gets to such insane levels that you either have to wait a lot or do level grinding (by restarting via "transcending"), so I got bored quite quickly. As often happens, tweaking or hacking a game becomes more fun than playing the game itself.
You can get the tool (needs Ruby) from my GitHub and easily see a simplified version of the algorithm.
Even if the code didn't have such obvious names, as the text splitter fragment can be easily spotted at the end of the file (look for `Fe12NAfA3R6z4k0z` in both screenshots), just doing a classic saved-game delta diff would have raised awareness (the whole content of the file would have changed except for that fragment) and made me search the SWF for that splitter string.