Getting started with wallaby, jest, vue and nuxtjs

(Image: Vue popularity)


Plenty of examples and use cases for wallaby can be found on their homepage, as well as on their examples pages.

You can get a 14-day free trial from here, or from your IDE once you’ve installed the plugin.


Traversy Media has a great introduction to it (video here). Jest’s official docs are also good if you prefer text.

Other useful links:

  1. mocking an entire class before it’s used in your SUT
  2. checking if wallaby is running your code, or if jest is
  3. jest-extended – expands jest with more ways to assert
  4. WebSocket client mocking
  5. preventing console.logs from bubbling up to output during tests
  6. snapshot testing

Vue & Vuex

I’ve followed a 20+ hour course on Udemy, “Vue JS 2 – The Complete Guide (incl. Vue Router & Vuex)”. The author keeps updating it, and it’s comprehensive.


This list will be updated with more resources over time, and I’ll write a comprehensive nuxtjs tutorial explaining how it’s used for running this blog. Subscribe to my newsletter for updates.

My prismic experience

I wanted a new template for my blog. And I got curious: what has happened in this space over the past few years? There must have been some innovation, somewhere. And indeed there has. WordPress came out with version 5, which many people seemed to dislike because the editor didn’t live up to the promised expectations. Is that all?

Headless CMSs started popping up. They promise to take care of the backend: you just call an API and you get the content from their database. I mean your database, just hosted on their end. Actually, just some data somewhere. Point being, one problem is taken care of. All you have to worry about is making a simple webapp that reads from this API, merges it with a template, and displays it. It could be as simple as having two routes: one for your homepage, and one for /my-page-uri.
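The two-route idea above can be sketched in a few lines. This is a minimal, hypothetical sketch – the endpoint shape and the field names (uri, title, content) are assumptions, not any specific CMS’s API:

```python
# Minimal sketch of a two-route frontend over a headless CMS.
# fetch_page(uri) stands in for whatever call hits the CMS API and
# returns a dict like {"title": ..., "content": ...}, or None if missing.

def resolve(path, fetch_page):
    """Map a request path to a rendered page string."""
    uri = "homepage" if path == "/" else path.lstrip("/")
    page = fetch_page(uri)
    if page is None:
        return "404 Not Found"
    return "<h1>{title}</h1>\n{content}".format(**page)

# Usage with a stubbed "CMS" (a plain dict):
pages = {
    "homepage": {"title": "Home", "content": "<p>Welcome</p>"},
    "my-page-uri": {"title": "My Page", "content": "<p>Hello</p>"},
}
print(resolve("/my-page-uri", pages.get))
```

Swap the dict for a real API call and you have the whole “frontend over a headless CMS” pattern.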

Headless CMSs are popular with the big guns out there. For example, Prismic is used by the likes of Netflix, Google, Deliveroo and DigitalOcean. They are fantastic for quick promotional campaigns or what look like static websites – by that I mean no comments, no ads, just a page. Again, you don’t have to worry about setting up a database, how you’re going to connect to it, security, permissions, etc.

It seemed interesting, so I started playing around with one of the few that are free. Either I’d be bought in and willing to pay more, or I’d say: you know what, it’s not for me.

I was going to use nuxtjs because it renders content for web crawlers, and that is important. Newer versions of Google’s bots do crawl webapps with dynamic frontend routing, but there are other crawlers out there, and I bet they can parse a website more easily if it’s simpler. I’d rather not trust a crawler to handle links that appear dynamically. What if there’s a delay between one link showing and the next, or the headless browser hits a bug? The crawler wouldn’t get far on my website, and pages would be missing from search results.

And away I went. I set up Prismic – they offer a free plan and a bunch of ways to get started. I set up the fields I wanted every page to have: a uri, a title, an image, and the content. Really easy to do.

I had almost finished porting over the design I originally had in WordPress – from its PHP files and the like – when I found an issue. I was new to nuxtjs, and all along I had been ignoring the fact that a simple blog page was taking 400ms or more to show in my browser. I told myself it had to be something temporary. Maybe that’s just how nuxtjs is in development (I didn’t have much experience with it at that point), and it wasn’t that evident anyway. Until I was ready to push to production and found out that WordPress took 10ms to deliver a fully loaded HTML page of my homepage from my host in the Netherlands to the UK, while the new blog I’d built took between 450ms and 1600ms on localhost.

Well, that couldn’t be ignored anymore. I started researching the issue – it’s mentioned on their website – and I got confirmation that it is, in fact, a problem they are working on.

And… I was done with Prismic. For what it’s worth, I prefer to know my blog is built on solid ground. I could’ve used Cloudflare – there would be a fair bit of annual cost – but I liked WordPress. What’s wrong with WordPress?

I updated my blog to WordPress 5 and thankfully didn’t lose anything – please do a backup before this (there is a plugin for it). And I liked it. I typed all of this on my phone for the first time. I don’t like sitting down to write; I like to be on the go. You know us millennials.

WordPress has an API endpoint at /wp-json, and after a quick look, it had almost everything I wanted to replace Prismic with: the posts are there, the pages are there. I think it even comes enabled out of the box with WordPress 5.
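A sketch of reading posts through that API. The /wp-json/wp/v2/posts route and the response shape (top-level slug, title.rendered) are the standard WP REST API defaults; the site URL is a placeholder:

```python
import json
# A live call would use urllib.request.urlopen on the real endpoint.

WP_API = "https://example.com/wp-json/wp/v2"  # your WordPress site's base URL

def parse_posts(payload):
    """Extract (slug, title) pairs from a /wp/v2/posts JSON response."""
    return [(p["slug"], p["title"]["rendered"]) for p in json.loads(payload)]

# Live:  parse_posts(urllib.request.urlopen(WP_API + "/posts?per_page=10").read())
# Stubbed response in the default WP shape:
sample = json.dumps([
    {"slug": "hello-world", "title": {"rendered": "Hello world!"}},
])
print(parse_posts(sample))
```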

In the end, it was quite a ride: from wanting to uplift my WordPress template, to using nuxtjs as my rendering tool, keeping WordPress just as the backend (except when I need to write articles), and ending up with a very dynamic website. It loads quickly wherever you click. I like that.

What do you think?

If you want to see how I’ve done it, it’s all on github! ioandev/ioanblog

How do I delete stale git branches?

We had over 100 branches that needed to be deleted. Here is the script.
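The original script isn’t reproduced here, but a sketch in the same spirit might look like this. The “month”/“year” stale test matches the keywords mentioned below; the branch names are examples, and the actual delete is left commented out on purpose:

```python
# Sketch: list remote branches with their last-commit age and flag stale ones.
# "Stale" here means the relative age mentions "month" or "year" - an
# assumption matching the keyword-formatting idea below.
import subprocess

def is_stale(age):
    """'3 months ago' or '2 years ago' counts as stale; '5 days ago' does not."""
    return "month" in age or "year" in age

def list_branches(git_output):
    """Parse 'branch<TAB>age' lines into (branch, age, stale) tuples."""
    rows = []
    for line in git_output.strip().splitlines():
        branch, age = line.split("\t", 1)
        rows.append((branch, age, is_stale(age)))
    return rows

def main():
    out = subprocess.run(
        ["git", "for-each-ref", "refs/remotes/origin",
         "--format=%(refname:short)\t%(committerdate:relative)"],
        capture_output=True, text=True, check=True,
    ).stdout
    for branch, age, stale in list_branches(out):
        print(f"{branch}\t{age}\t{'STALE' if stale else ''}")
        # To actually delete a stale branch, you would run something like:
        # subprocess.run(["git", "push", "origin", "--delete",
        #                 branch.split("origin/", 1)[-1]])

# Run main() from inside a repository.
```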

You can open the output in Excel and add some formatting on the keywords “month” and “year” if you want to share the output file around. Otherwise, just run it!

PS: Extra points if you name the file git-remove-branches and put it in $HOME/bin. You can then run “git remove-branches” in whatever repo you’re in.

Azure – serverless functions

Businesses are now moving into the cloud for lower costs, lower maintenance, zero upfront cost, zero hardware upkeep, and better performance. The big dogs are AWS, Azure and Google Cloud. Needless to say, Azure is preferred by Microsoft-stack businesses due to compatibility of languages and software, and the ease of staying in the same ecosystem.

The days when you had to pay for an entire SQL server and its license upfront, maintain unscalable monolithic applications, and worry about the hardware not being able to handle your software are almost over. We’re transitioning to serverless functions, rented servers where still needed, and non-relational databases such as Cosmos DB.

Azure provides many benefits out of the box. SQL servers can be replicated across their regions. They guarantee high uptime. Serverless functions mean the API endpoints of your service don’t have to sit on the same box, or be replicated across many boxes. Instead, they can run separately as individual functions ‘somewhere’ in the cloud – you don’t have to worry about where. If you get 1 million unexpected requests to an endpoint that usually gets 10k, Azure takes care of that by running them ‘somewhere’. You can still access your data in very similar ways. Rewriting the code to accommodate the serverless paradigm does take some time, but it’s easy once the foundation is down: where do you get the data from, how, what do you have to return, who do you let access that endpoint, and how do you authenticate the users?

How many web servers do we need to host the API endpoints? One. Oh look, we get more requests now. Start another one, and then another to funnel the traffic between the two – now we have one reverse-proxy machine and two API servers. More traffic? Easy, bring up more machines. However, even with access to an unlimited number of VMs, they still cost a lot more than serverless functions, and they need time to spin up and tear down. You’re paying for Windows licenses on those machines, and you might have to maintain the software. Just use serverless functions.

Serverless functions work not only for API endpoints but also for processing queues. One such queuing service in Azure is called Service Bus. When you upload a video to YouTube, that file goes through a multitude of steps before being shown to users. First, it gets uploaded somewhere. Then screenshots are taken for the thumbnail, where you can choose the one you like most. Then the video is converted to a smaller compressed size at the same resolution, as well as to lower resolutions. These steps can run in parallel, and they can be done with serverless functions. Recently Azure Container Instances were introduced, with Durable Functions on their way.
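The fan-out described above – one upload event spawning several independent processing steps – can be sketched with a plain in-process queue. This is stdlib-only and purely illustrative; in Azure the queue would be Service Bus and each handler a serverless function, and the step names are made up:

```python
# Sketch: one "video uploaded" message fans out into independent jobs
# consumed by parallel workers. Stand-in for Service Bus + functions.
import queue
import threading

jobs = queue.Queue()
results = []
lock = threading.Lock()

def handle(job):
    # Each step is independent of the others, so workers can run in parallel.
    video, step = job
    with lock:
        results.append(f"{step}({video})")

def worker():
    while True:
        job = jobs.get()
        if job is None:          # poison pill: shut this worker down
            break
        handle(job)
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

# One upload triggers several processing steps.
for step in ("thumbnail", "transcode_720p", "transcode_480p"):
    jobs.put(("cat_video.mp4", step))
jobs.join()              # wait until every step has been processed

for _ in threads:        # stop the workers
    jobs.put(None)
for t in threads:
    t.join()

print(sorted(results))
```

The queue decouples the producer from the workers, which is exactly what lets the cloud scale the workers out behind the scenes.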

To learn more: Azure released Functions v2 out of beta not long ago. Go have a look.

Freelance project: Coffee Maker

I am unable to disclose much about this project. In essence, it is a GUI app written in C++ that is the main gateway through which the user interacts with a professional coffee maker. My part of the work was to:


  1. build the C++ GUI application from scratch, drawing everything from buttons to centred text to images, etc.
  2. compile the C++ app for an inexpensive, small, not-too-capable embedded i.MX 6UL processor
  3. compile a Yocto Linux build that includes the dependencies of the C++ app as well as the application itself, and deploy it.
  4. set up a channel of communication between the C++ GUI app and the other hardware pieces in the coffee maker.

Everything hardware-related was handled by my partner.

The UI comprised the following: a homepage, a menu page, a page for adding coffee/tea profiles (certain periods of time at certain bar pressures), time settings, and others.

Freelance project: Fortnite OCR

A weekend project that turned into a 4-day project: Fortnite OCR Detector. OCR stands for Optical Character Recognition.

A fellow streamer needed a way to show his number of victories and kills for the day above his webcam, to add something unique for his audience.

For this project, images are provided as input by taking screenshots of the entire monitor where the game runs (capturing the game directly is seen as a hack and will get you banned). The image is then cropped down to the sections of the screen where the kill count resides, as well as some other sections used to detect when a victory happened, and the counters are updated accordingly.

I wanted to keep things separate, so the OpenCV library was used in a C++ shell app, ‘Detector’, in charge of the detection described above. Most of the time was spent here, tweaking images, numbers and algorithms to get the best accuracy on the digits.
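The cropping step itself is the simple part – in OpenCV it’s just array slicing. A pure-Python sketch of the idea (the region coordinates here are made up; the real app used the game HUD’s actual positions):

```python
# Cropping a fixed screen region out of a full-monitor screenshot.
# With OpenCV/numpy this is just image[y:y+h, x:x+w]; here the image is a
# plain 2D list so the sketch runs anywhere. The KILLS region is invented.
def crop(image, x, y, w, h):
    """Return the w-by-h region of `image` whose top-left corner is (x, y)."""
    return [row[x:x + w] for row in image[y:y + h]]

# A 4x4 "screenshot" with distinct pixel values (value = 10*row + col).
screenshot = [[10 * r + c for c in range(4)] for r in range(4)]
KILLS_REGION = (1, 2, 2, 2)   # x, y, w, h - hypothetical HUD coordinates
print(crop(screenshot, *KILLS_REGION))
```

The cropped region is what gets fed into the recognition step, so the detector never has to scan the whole frame.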

I then had a python process that was bringing many things together:

  1. instantiate a child ‘Detector’ process, read its data and update the counters.
  2. create a simple local HTTP web server that serves an ‘index.html’ file as well as the custom font.
  3. the served index.html would itself create a client to a local WebSocket server, reconnecting if the connection dropped. It had to be backwards compatible with IE6 so the streaming application OBS could load the local page, replace the colour white with transparent, and display it on top of the game.
  4. create a simple WebSocket server that updates the clients (the browsers, in this case) with the most up-to-date counters, using Tornado.
  5. create a window using wxPython, a free cross-platform GUI library.

The Python script was converted to a Windows executable using PyInstaller.

The application running the OpenCV detection algorithms and the number-crunching uses 0% GPU and <2% CPU on an Intel Core i7 4790 3.6GHz box.

Internship project: LIMA

I spent around 6 months of my internship working on a software project called LIMA, whose purpose was to plan for potential future hazards. I was in charge of managing the update thread, slowing things down and speeding them up (like a movie, 1x to 16x), building a timeline of events happening at specified time intervals, and making it possible to attach data – from text to images to PDFs – to markers placed on the map, as well as opening them inside the application itself.

The main stakeholder was the HSFRS (Humberside Fire & Rescue Service), although interest was also shown by British Petroleum and Tata Steel. I was involved in some of the demonstrations for these clients.

I was an intern in The Digital Centre, Hull from August 2016 to August 2017.

python vlc callback inside class

I found it hard to find a solution for having a callback method inside a class that you are using. Here’s an example of how to do it, though.
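A sketch of the pattern. The Player class and the attach helper are my own names; the event_manager()/event_attach() calls follow python-vlc’s API, and the vlc-specific wiring is left in comments so the pattern itself is self-contained:

```python
# The trick: the callback is a bound method on the class that owns the
# state, so it can reach `self` when libvlc fires the event.
class Player:
    def __init__(self):
        self.ended = False

    def on_end_reached(self, event):
        # Runs on vlc's event thread; keep it light and avoid calling
        # back into libvlc from here.
        self.ended = True

def attach_end_callback(player, event_manager, event_type):
    # event_attach(event_type, callback) accepts the bound method directly.
    event_manager.event_attach(event_type, player.on_end_reached)

# With python-vlc (pip install python-vlc) the wiring would be:
#   import vlc
#   p = Player()
#   mp = vlc.MediaPlayer("video.mp4")
#   attach_end_callback(p, mp.event_manager(),
#                       vlc.EventType.MediaPlayerEndReached)
#   mp.play()
```

Because the bound method carries its instance with it, no globals or module-level functions are needed.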

Head-tracked stereo display board

Internship summer 2014. The project is made in Unity and it’s using a Kinect and a Gigabyte Brick.

Prototyping a Head-tracked stereo display board with horizontal parallax tracking so the display follows the viewer. More details here.

Every year I engage in a big project. This time I partnered with a YouTuber to build a Minecraft community with YouTubers and fans at its core.

Step 0: Minimum Viable Product

Before going ham on your project, think about what’s needed for the MVP and don’t go much further than that at the beginning. All you need is validation that the project can succeed, and a way to measure that success.

Step 1: Bring the hype


You’ll need:

  • A fan base
  • A website where people can enter their name and email if they’re interested in the project. Add a countdown to it.


  1. Make a YouTube video, 1-3 minutes long, briefly presenting the project
  2. Make a Discord server where people can join and discuss the server further
  3. Make a YouTube page that people can like
  4. Release the video

Step 2: Test if the idea is viable

You’ve got a month now. Look out for:

  1. How many people are interested: count the likes across all platforms and how many people left their e-mail address on your website.
  2. The number of true fans that are by your side.
  3. How many people are helping you and giving you feedback.

Step 3: Release the product

By now you should have your MVP ready, so release it. Expect a lot of bugs to be found in the last 5 minutes before release, and don’t panic too much.

All of minedive ran in a Vagrant box; there was no reason to test it on Windows.

minedive – This is by far the most important project I’ve ever worked on.

It involved: the main website, a forum, a bunch of Minecraft servers, a single dedicated server capable of handling a lot of load, communication between the servers, website and forum, a caching system, adapting to new technology, and a long list of storage options: MySQL, PostgreSQL, Redis and LDAP.

It also involved a careful separation of development and production, with different credentials and environments. My business partner and I worked on parts of the project that overlapped: configuring the Minecraft servers, writing Minecraft plugins from scratch in Java, and marketing and retention – inviting people over and keeping them interested via Facebook and YouTube. I was responsible for all of the website and server-admin work, while my business partner focused on the Minecraft servers and converting his YouTube fans into players.

It also meant interacting with our users on a daily basis, on the forum as well as on the Discord server. A Facebook page was put in place and easily hit 300 likes in under two months: MineDiveRO.

Before the launch, which was announced a month prior, 1000 people had visited the website and entered their Minecraft username and email. On launch day the Discord was flooded with messages and there was a nice hype about it.

The team grew to include other roles that we gave to people who wanted to help. They were given access to a private server.

Managing a team of people, everyone with their own desire, was also a great experience.

Trello was the backbone of the project management, with frequent Skype calls and frequent Slack chatter.

Puppet was the pipeline from debug to release. Uploading manually every day took too much time, and although the process was outlined in 7-10 steps, human error kept getting in the way, so a Puppet script was made to automate it.

Admin stuff: I learned a lot about Unix. The basics: files, users, permissions, user creation, installing services, firewalls (ufw), uptime management (supervisor), flood issues (fail2ban), constant backups, logs. Basically, any problem that arose was mine to fix. Security was a big concern, so I got a consultant to help me with it.

From the small pre-launch website that just had a counter of sign-ups, to the entire suite of apps, I also created a small Android app that got notified by the website whenever a new sign-up was made. This was more of a distraction, and should have been left for later anyway.

SendGrid for sending e-mails.
Programming/scripting languages: Python, Java, JavaScript
Others: HTML5, CSS3
Libraries: Tornado (WebSocket server), PayPal IPN, React.js for the website frontend
Login system: Apereo CAS – logging into the website, forum and Minecraft server with the same username & password.
Redis queue – queuing messages between applications: website, forum, WebSocket server, 5 Minecraft servers
Storage: LDAP, MySQL, PostgreSQL, Redis

PayPal IPN would notify the site, which in turn contacted the first Minecraft server in the cluster, so the user got the rank he bought instantaneously and the Minecraft server itself didn’t have to request such a list every minute. Scaling was a big factor in the making of this project.

Python powered the minedive website, the WebSocket server, and the Discord bot.
The forum used Misago, an open-source Python forum. I had to develop the CAS script for logging in, which you can find here. I also had to develop another frontend for it, as the default one didn’t accommodate our needs – too much stuff in the wrong places. It took a lot of guesswork and refactoring of the templates, both the Python and the JavaScript. The JavaScript was also done with React, and already knowing the technology from the website helped.

3 line optimisation in Habitica

I’m using an RPG TODO list that keeps me going. Unfortunately, the app was a little slow when scrolling through my big list. Luckily for me, the project is open source, so I was able to track the problem down and fix it myself.

Many parts of the application could have caused this, so I had to pull pieces out one by one and see whether it was still slow.

Finally, I found the culprit. Instead of showing text with a TextView, Habitica uses an emoji library to display it: instead of rendering the unicode for a smile, it replaces it with a bitmap image. I initially thought the lag on my phone came from the RecyclerView – because the bottleneck appeared while creating each of the sub-views (one per task) – but it turned out to be the rendering of the emojis. For each character in the text it would go through a huge decision tree, even when no emojis were present.
Each letter took about 1ms, so a text such as “Daily – 10” incurred around a 10ms delay. You can see the changes I made here.

I’m surprised that it was such an easy fix.