Combining PC and Nintendo Switch, I've spent hundreds of hours playing Diablo III. Sadly there is no cross-platform account support, so my progress on the console (way ahead of the PC one, because I can and did play offline a lot) can't be merged. And this videogame is one of the few that actually play best on a console, for two reasons: the controls are really, really good (so nice that I prefer them to mouse and keyboard), and the aforementioned offline play (the PC version requires persistent online connectivity even if you play alone).
I also love emulation, and recently the Switch emulation scene has advanced so much, at least regarding the yuzu emulator, that I decided to try it with Diablo III. It works so well that I set a small goal for myself: if I'm able to somehow hack or edit the game to "recover" at minimum my account's experience levels, I'll stop playing on the physical console and play via the emulator (complete with the Pro Controller and all, of which I have one).
My first approach was the easiest experiment: trying Cheat Engine. I attached it to the Yuzu.exe process and selected "All" as the value type. I tried searching for money, and finding the correct memory address was trivial. Having confirmed that it can be used to alter the game, I then went seeking the jackpot: changing the experience (to quickly recover my hundreds of paragon levels). A search yielded some results, but none of them were the real storage for experience, as they were reset back to the original value either after a while or after obtaining more experience points.
Now, how could experience levels work, if they are not the number of points you have? Well, they might be what I call an "inverted value" or "remainder value": not the experience you have, but how much is missing until the next level. Each time you level up, your current experience resets to 0 and you need an amount X, which decrements as you kill enemies and complete quests. Searching for that decrementing X will very often give you what you seek. For Diablo III, this was the key. You can find the actual amount inside the character inventory, at your character details, scrolling down the list. Note that Cheat Engine will find 3 values (at least it did in my case, in multiple tests), so play around changing one at a time until you find which one controls the actual XP value. Then set it to 1, freeze the memory address, and you're done: 1 kill -> 1 level increase (normal levels first, then paragon).
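The narrowing that Cheat Engine does between scans can be sketched in a few lines of Python. This is a toy simulation (the memory snapshot, the addresses and the XP numbers are all made up), but it shows why scanning for the decrementing remainder converges on the right address:

```python
# Toy simulation of Cheat Engine's "first scan" / "next scan" narrowing.
# The fake memory snapshot and its addresses are invented for illustration.

def first_scan(memory, value):
    """Initial scan: every address currently holding `value`."""
    return {addr for addr, v in memory.items() if v == value}

def next_scan(memory, candidates, value):
    """Narrow previous candidates to those now holding the new `value`."""
    return {addr for addr in candidates if memory[addr] == value}

# Fake snapshot: 0x30 is the real "XP remaining until next level" counter,
# the others just happen to hold the same number.
memory = {0x10: 1200, 0x20: 1200, 0x30: 1200, 0x40: 55}

candidates = first_scan(memory, 1200)        # three addresses match

# Kill a monster worth 150 XP: only the real counter decrements.
memory[0x30] -= 150

candidates = next_scan(memory, candidates, 1050)
print(sorted(hex(a) for a in candidates))    # ['0x30']
```

One or two "kill something, rescan" rounds are usually enough to single out the real counter, which is exactly the manual process described above.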
A friend of mine found that there are savegame editors for the console versions. They are sometimes of no use unless you have a modded device, as saves are encrypted or at least signed, but it seems that, at least in Diablo III's case, people know how to circumvent the encryption, so after making a backup of the savegame files I tried D3Edit.
It is a Python 3 script, but the GUI (despite being pretty basic) requires tkinter, which is not in the base Python install, so if you use Linux do:
sudo apt-get install python3-tk
And if you use Windows, make sure the tcl/tk option checkbox is marked when installing Python.
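If you want to confirm the dependency is there before launching the editor, a quick check works on both systems (this is just a convenience snippet, not part of D3Edit itself):

```python
# Check whether tkinter is installed without actually loading it,
# so the check also works on headless machines.
import importlib.util

def has_tkinter() -> bool:
    return importlib.util.find_spec("tkinter") is not None

if __name__ == "__main__":
    if has_tkinter():
        print("tkinter found, the D3Edit GUI should start")
    else:
        print("tkinter missing: install python3-tk (Linux) "
              "or re-run the Windows installer with tcl/tk checked")
```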
With that I can confirm that it works and allows a much greater deal of edits: character and paragon levels, money, all resource types, and there's an inventory editor. The items editor also works, but it is a bit unintuitive (up to the point that I thought editing items was broken). You first need to "add an item", choosing the rarity, type, subtype and number of affixes. Then, after saving/adding it, it will appear on the left-most list, and once you click it you will be able to modify the affixes. For many items I was still unable to add affixes with the "safe mode" checkbox active, so you should disable/uncheck it.
You can also add other types of items, like pets (I had +15 on my account). Just keep in mind that x-y damage on weapons and x-y armour on shields and gear are a must, as by default every item has really low damage/armour stats. And of course be careful not to spoil the entertainment, as after all the greatest fun (addiction?) of the game is finding better loot and constantly improving your character's equipment.
After these experiments, I've decided to switch to the emulated version (pun intended). I can play at least as well as on the real console and have room to explore alternatives if I get bored (maybe improve the item editor?). Plus I can go back to the PC version if I want some online multiplayer.
I always have Quake installed on my computer; it is easily my all-time favourite videogame, but that doesn't mean I always play it. After reading another book about the company that built it, I got interested in playing again, but also in tinkering with it in more technical ways.
The source code is mostly a C codebase, and I'm too lazy right now to start fiddling with that. The QuakeC subsystem that the game implements for modifications is a clear target for future experiments, but also not what I wanted in a first phase.
Instead, I attempted two other approaches: building external & auxiliary tools, and finding out if somebody had ported the code to other development languages.
One of the best and still alive sites for obtaining Quake mods and maps is Quaddicted. They even offer a Java-based launcher, but for some reason it didn't work on my Linux computer, and I also didn't want to install Java on the gaming computer.
Instead, I gave a spin to the idea of a map selector, and built what my friends are calling a "Quake roulette": a random map launcher that, using Quaddicted's maps database, chooses and downloads a map, installs it and launches the game, with a --loop flag to keep running random maps until you get tired.
The result of the experiment is Quaddicted Random Map, a simple Python script. It still needs improvements, as map zip files are very chaotic, but in general most maps work (as in "get installed correctly"), and it has been tested on Linux, Mac and Windows.
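The core of the roulette is tiny: grab Quaddicted's XML map database, pick a random entry, and hand it over to the engine. Here is a stripped-down sketch, not the actual script; the database URL and the XML element/attribute names are from memory, so double-check them against the live feed:

```python
# Minimal "Quake roulette" sketch: pick a random map id from the
# Quaddicted database XML. URL and element/attribute names are
# assumptions from memory; verify against the real database.
import random
import urllib.request
import xml.etree.ElementTree as ET

DATABASE_URL = "https://www.quaddicted.com/reviews/quaddicted_database.xml"

def pick_random_map(xml_text: str) -> str:
    """Return the id of a random <file> entry from the database XML."""
    root = ET.fromstring(xml_text)
    entries = [e.get("id") for e in root.iter("file") if e.get("id")]
    return random.choice(entries)

def fetch_database() -> str:
    with urllib.request.urlopen(DATABASE_URL) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    map_id = pick_random_map(fetch_database())
    print("Tonight you are playing:", map_id)
    # From here on: download the map's zip, extract it into the Quake
    # folder and launch the engine (left out for brevity).
```

The messy part that the sketch skips is exactly where the real script spends most of its code: figuring out where inside each chaotic zip the actual .bsp files live.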
The client side is a technical feat by itself: there are many cross-platform Quake engines, like vkQuake or QuakeSpasm, but now you can use any recent browser instead to play the game (I tested Firefox and MS Edge), complete with sound and multiplayer.
I did a fork because the original repo seems abandoned; there are a few issues and PRs piling up, some of them important, like one that fixes normal clients (desktop clients implementing the original NetQuake protocol) not being able to connect to the WebQuake server. It also provided instructions for manual installation, but I wanted to containerize it, so I added some Dockerfiles and ensured everything works both with the repo's client and with external clients (in sv_protocol 15 mode). And finally, as my friends and I start to use it, I'll probably add tiny tweaks (like the already present basic autoexec.cfg) and begin to diverge the code a bit from the original.
It's amazing what projects people can build out of love for certain games (or for the challenge? maybe both). If anybody had told me in the late nineties that we'd be able to play Quake in a web browser, or even simply have a huge selection of custom maps freely available for endless hours of fun, I'd have thought they were crazy.
Quick post with the findings of some small research about Microsoft 365 Substrate, also known as the Office 365 Substrate.
A brief description could be that Microsoft 365 Substrate is an "Intelligent Substrate Platform" in Office 365, applying AI to everything related to O365. Its goal is to store in the substrate all the files and information (or a copy of the information) that users employ to create, collaborate, and communicate.
It consists of storage plus services to access the information. The services include Microsoft Graph, the main and preferred way of accessing the data. AI also builds insights from the substrate.
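To get a feel for what "accessing the data" means in practice, a Microsoft Graph call is just an authenticated HTTPS request against graph.microsoft.com. A small sketch follows; the OAuth2 token acquisition against Azure AD is left out, and /me/messages is only one example endpoint:

```python
# Sketch of a Microsoft Graph request: Substrate data is reached via
# plain authenticated HTTPS calls. Token acquisition (OAuth2 against
# Azure AD) is out of scope here; `token` is assumed to be valid.
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_graph_request(token: str, path: str) -> urllib.request.Request:
    """Build a GET request for a Graph resource path like '/me/messages'."""
    return urllib.request.Request(
        GRAPH_BASE + path,
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
    )

def get_graph(token: str, path: str) -> dict:
    with urllib.request.urlopen(build_graph_request(token, path)) as resp:
        return json.load(resp)

# Example (requires a real token):
# messages = get_graph(token, "/me/messages")
```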
The substrate idea was initially part of Exchange's data store architecture, and it relies on it to a great extent. For example, all Teams substrate data goes to Exchange mailboxes (public or hidden).
I found the following links interesting to gather both that general idea and more details about the Substrate:
Project Cortex analyses content across teams and platforms, then organizes & classifies it and, most importantly, builds a knowledge network from the relationships between the content, the people and the extracted information. It heavily uses AI, in theory without human intervention. It is used inside the 365 Substrate.
To learn more about Cortex I found the Knowledge and Content Services Blog of interest.
A clear resulting project built from employee activity and collaboration data is Microsoft Viva. You can read more about it here: https://www.microsoft.com/en-us/microsoft-365/blog/2021/02/04/microsoft-viva-empowering-every-employee-for-the-new-digital-age/
Substrate + others → MetaOS/Taos
The next logical step once you have the Substrate ready is to join it with other pillars and try to provide richer and higher-level actions and insights. MetaOS will be a foundation over SharePoint, the 365 Substrate, Azure and Microsoft's machine-learning infrastructure. Its specific usages are unknown, but a unified search across all Microsoft products represents a good example.
Further details: https://www.zdnet.com/article/what-is-microsofts-metaos/
I've always tried to squeeze loading speed out of my sites: I use a cheap server to host my site but want it served and ready as quickly as possible. I've tried tools like guetzli to better compress JPEGs, but it is so slow that I often just didn't run it if the image was already small, as the gains are negligible.
So with the aforementioned WebP support almost everywhere, I decided to update my small Pelican image plugin so it uses <picture> markup to serve WebP to modern browsers, while still relying on the old JPEG/PNG original images for oldies.
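The markup follows the standard <picture> fallback pattern: browsers that understand WebP pick the <source>, older ones fall back to the original in the <img> tag. A simplified sketch of what gets generated (this function is illustrative, not the plugin's actual code, which also deals with paths and extra attributes):

```python
# Simplified version of the <picture> markup the plugin emits.
# Illustrative only, not the plugin's actual code.

def picture_markup(original_src: str, alt: str = "") -> str:
    """Emit <picture> markup with a WebP source and a JPEG/PNG fallback."""
    stem = original_src.rsplit(".", 1)[0]     # strip the extension
    return (
        "<picture>\n"
        f'  <source srcset="{stem}.webp" type="image/webp">\n'
        f'  <img src="{original_src}" alt="{alt}">\n'
        "</picture>"
    )

print(picture_markup("images/screenshot.jpg", "A screenshot"))
```

The `type="image/webp"` hint is what lets the browser skip the source without downloading it when WebP is unsupported.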
Compressing all the images, I got really nice results with a pretty standard configuration: between 25% and 35% size reduction with no apparent visual quality loss, sometimes going up to 50% (mostly with PNGs).
Chatting about it on Twitter, I was told that WebP v2 is already under construction, and that it explores some ideas from AVIF. That got me interested, so I read about it and watched this interesting talk by a Google engineer working on the new version, which shows not only how it works but also some cool tooling to test the different parameters that affect the compression.
You can also see other nice details, such as how the WebP2 triangle preview works in this Codepen, which reminded me of a former colleague's post about doing precisely the same thing with SVG instead: building a thumbnail out of triangles so it weighs very little. I recommend checking both the experiment and reading JM Perez's post.
I also thought about adding AVIF to the blog, but support is still very limited, so I'll wait a bit more. And if WebP2 comes out in the meantime, I'll probably go for that one instead.
Author(s): Kevlin Henney, Trisha Gee & others
Creating a very direct association with an existing book or line of books is a good marketing technique, but it doesn't guarantee quality. This book does it, and it kind of exemplifies why I sometimes prefer quality over quantity.
The book follows the same pattern as 97 Things Every Programmer Should Know: 97 tips, each 2-3 pages long, on very varied topics, not only focused on Java itself. A sample of the topics included: language features, testing, documentation, pragmatism, JVM tuning & optimizations, architecture, tools, teamwork, CI, concurrency and parallelism, debugging, interop, Kotlin and Groovy...
Some of the content is overly generic... and in those cases often not even Java-specific, which is not bad in itself, but I wanted Java tips; I already read the "generic programmer" version of the book. And when I say "some" I mean, as a rough estimate, half of the tips. Also, a few tips are so similar that they feel repeated. And my final complaint is that some topics that appear to be really interesting (like the actor model, concurrency or Groovy) are touched so briefly that they feel like a mere mention more than a tip.
That said, the actual Java tips are good, at least to somebody like me without much experience with the language. I took quite a few notes and got topics to now study in depth, so the main goal of waking up my curiosity was achieved. But I would have preferred a "50 tips" version, all of them focused on the language.