
Arch: CachyOS + Niri + Noctalia + Steam + OBS + Code

I used Nobara for almost six months, and now I've transitioned to Niri. So far I can say that I don't miss it. Below is my guide on how to set everything up, including the apps I used on Nobara.

[Screenshots: Desktop 1 and Desktop 2]

First, install CachyOS using the settings here.

Install yay and flatpak:

# Install required dependencies
sudo pacman -S --needed base-devel git

# Clone yay repository
cd ~
git clone https://aur.archlinux.org/yay.git

# Build and install yay
cd yay
makepkg -si

# Clean up
cd ..
rm -rf yay

sudo pacman -S flatpak

Install requirements:

  • sudo pacman -S bluez bluez-utils && sudo systemctl start bluetooth.service && sudo systemctl enable bluetooth.service
  • sudo pacman -S xdg-desktop-portal-gnome && gsettings set org.gnome.desktop.interface color-scheme 'prefer-dark' && gsettings set org.gnome.desktop.interface gtk-theme 'Adwaita-dark'
  • sudo pacman -S steam
  • sudo pacman -S discord
  • yay -S obsidian google-chrome
  • sudo pacman -S obs-studio
  • sudo pacman -S vlc
  • yay -S fsearch
  • yay -S visual-studio-code-bin
  • sudo pacman -S podman podman-compose
  • sudo pacman -S gnome-keyring (then set it up in VS Code: run "Preferences: Configure Runtime Arguments" and add "password-store": "gnome-libsecret")
  • sudo pacman -S dbeaver
  • flatpak install flathub io.missioncenter.MissionCenter
  • sudo pacman -S filelight
  • yay -S github-desktop-bin
  • sudo pacman -S grim slurp
  • sudo pacman -S wl-clipboard
  • sudo pacman -S swappy
  • curl -fsSL https://tailscale.com/install.sh | sh

Noctalia changes:

Begin by reading the Noctalia FAQ: https://docs.noctalia.dev/getting-started/faq/

  1. Add your wallpaper and profile picture.
  2. Add dock
  3. Set location
  4. Set opacity
  5. Change the size of the dock (in accordance with the value you put for struts in niri)
  6. Fix the icons by adding QT_QPA_PLATFORMTHEME=gtk3 to /etc/environment (as root), as shown below.
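One way to do that is a one-liner that appends the variable as root:

# Append the variable system-wide
echo 'QT_QPA_PLATFORMTHEME=gtk3' | sudo tee -a /etc/environment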

Niri changes

Note: before you quit Niri and log back in, run niri validate to make sure the config is still valid; otherwise Niri will start with a blank config.

Change your keyboard layout:

input {
    keyboard {
        xkb {
            layout "gb"
        }
        numlock
    }
}

Add your displays:

output "DP-1" {
    mode "3440x1440@165.001"
    position x=0 y=0
}

output "HDMI-A-1" {
    mode "1920x1080"
    position x=720 y=1440
}

You can find their info with the command niri msg outputs

Add key bindings:


    // Core Noctalia binds
    Mod+Space { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "launcher" "toggle"; }
    Mod+S { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "controlCenter" "toggle"; }
    Mod+Comma { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "settings" "toggle"; }
    
    // Audio controls
    XF86AudioRaiseVolume { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "volume" "increase"; }
    XF86AudioLowerVolume { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "volume" "decrease"; }
    XF86AudioMute { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "volume" "muteOutput"; }
    
    // Brightness controls
    XF86MonBrightnessUp { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "brightness" "increase"; }
    XF86MonBrightnessDown { spawn "qs" "-c" "noctalia-shell" "ipc" "call" "brightness" "decrease"; }

    // Screenshot region to clipboard
    Mod+Shift+S { spawn "sh" "-c" "grim -g \"$(slurp)\" - | wl-copy"; }
    
    // Screenshot region with editing (Swappy)
    Mod+Print { spawn "sh" "-c" "grim -g \"$(slurp)\" - | swappy -f -"; }

Make sure you remove previous entries for Mod+Space, XF86AudioRaiseVolume, XF86AudioLowerVolume, XF86AudioMute, and Mod+Print.

Next, make room for the dock at the bottom of the screen:

layout {
    gaps 7
    
    struts {
        left 0
        right 0
        top 0
        bottom 40
    }
}

Startup script with windows in the right place

Note: to find a Chrome app's .desktop file, use: grep -il bitwarden ~/.local/share/applications/chrome-*.desktop

In my case I want the following:

First screen (large): Workspace 1:

  • Todoist (50% width) /home/ioan/.local/share/applications/chrome-knaiokfnmjjldlfhlioejgcompgenfhb-Default.desktop
  • Chrome (75% width)
  • Claude (75% width) /home/ioan/.local/share/applications/chrome-fmpnliohjhemenmnlpbfagaolkdacoja-Default.desktop

Workspace 2:

  • VS Code main projects (75% width) (opens /home/ioan/Documents/repos/ps2mono with code)
  • Terminal Alacritty (50% default)

Workspace 3:

  • VS Code secondary projects (75% width) (opens /home/ioan/Documents/repos/mysql-downloader/ with code)

Workspace 4:

  • SSH sessions running in VS Code (Flatcar project) (opens /home/ioan/Documents/repos/flatcar with code) (75% width)

Workspace 5:

  • Gaming + my app in another Chrome browser session: http://ps2immersion.com/ (game off by default) (35% width)

Secondary screen (small): Workspace 1:

  • Bitwarden (Chrome app) (35% width) /home/ioan/.local/share/applications/chrome-fflifmfnonladkgkdehllhbcghakccgh-Default.desktop
  • Chrome (75% width)
  • Terminal Alacritty (default) (50% width)

Workspace 2:

  • Discord (60% width)
  • Obsidian (60% width) /home/ioan/Documents/Obsidian/SaMearga

Find the script below.
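The full script isn't reproduced here, but as a minimal sketch of how it works in Niri's config – the workspace names, app IDs, and widths below are illustrative, not my actual values:

// Named workspaces pinned to an output
workspace "main" {
    open-on-output "DP-1"
}
workspace "chat" {
    open-on-output "HDMI-A-1"
}

// Launch apps at login
spawn-at-startup "google-chrome-stable"
spawn-at-startup "alacritty"

// Route each app to its workspace and set its width
window-rule {
    match app-id="google-chrome"
    open-on-workspace "main"
    default-column-width { proportion 0.75; }
}
window-rule {
    match app-id="Alacritty"
    open-on-workspace "chat"
    default-column-width { proportion 0.5; }
}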


A whole lot of websockets


I've built a system that relies on five WebSocket servers. The API I am connecting to is a WebSocket server. I ingest the data coming from it on my own three WebSocket servers (each for a different type of data) and allow users of my websites to connect to a replicated WebSocket server for data. The browsers receive the data in less than 0.5 seconds, and yes, there are queues thrown in the mix for reliability: one Redis queue for each of the connections to the WebSocket server feeding me data, and then two other RabbitMQ queues between my three WebSocket servers and the final WebSocket server users connect to.

The end result? Two websites and a Discord bot. They allow users to be notified when players come online in the game PlanetSide 2, as well as add extra sounds on top of in-game events, such as killing an enemy or being revived. There is no problem if notifications of players coming online take more than a second, but there is a problem if you are on a killing spree and it takes more than a second to hear it - it should be almost instant.

There is another project in the works besides the websites and the Discord bot: a cheating detector. The obvious cheaters in the game can make people rage quit, and it often takes some time for admins to do something about it. By gathering 100M+ game events every month, I will be able to come up with patterns and an app that quickly finds out who the obvious cheater is, currently within 15-30 minutes of feeding it data.

Links to the projects:

A bit more about the project

PlanetSide 2 is an MMOFPS game involving infantry, armoured tanks, and air battles. Although it's downloadable through Steam, it has its own independent friends list, so it's not easy to know when someone comes online. I often play with around 4 people in small vehicles where someone has to drive and another has to gun. I need to know when a gunner is available, and when I know they are, I can join the game. Another use case is for large outfits: it is useful to know when the leaders are online so that players can join the game. Many players want to play with leadership and objectives in mind. And this is where the PlanetSide 2 tracker comes in.

There are other kill event streams that allow you to add custom sounds for game events; however, I couldn't find one that works on Linux. Initially, I created my own local one in C# because I was inspired by a streamer - this was around 2 years ago. Since then, I decided to work on the cheating detection system. Since I had to save all the data anyway, it didn't take me long to add a website that would also stream the events in real time, which became PS2 Immersion.

The architecture

I want to show the architecture behind the system.

For the architecture, I've decided to go all in and make it fault-tolerant and scalable. A total of three machines are used in a Docker Swarm. One is the master, and it hosts Traefik for load balancing requests. All machines host Redis instances for locking purposes. All three machines host MySQL (1 master, 2 slaves), and two machines host RabbitMQ (master/slave). The architecture is event-driven. There are also various containers running for health checks, logging, graphs, and alerting (I get a phone call within 5 minutes if something goes wrong anywhere). The only container that is not replicated at the moment is the bot, but this will also be taken care of in a future version - it's just not in that many servers yet, so anything like sharding is not worth it.

[Image: logos of the tools in the stack]

The logo you might not recognise in the image is Dozzle, a Docker container log viewer. Although I use Loki and Promtail for log history, Dozzle has been great for looking at the most recent logs. Give it a try!

The servers' OS is Flatcar - an OS optimised for hosting containers. One thing I didn't believe I would get used to is not having a package manager. I just barely managed to install Tailscale because it doesn't need a package manager. Another feature it has is a toolbox: just typing the command "toolbox" runs Fedora in a container, and you're free to install apps like htop in it.

Note that even though the architecture diagrams below contain a database, it is only accessible by the API containers; the other containers use service clients to go through them. The API doesn't connect to the database directly either; there is a load balancer called ProxySQL that routes writes to the master and reads to the slaves.

Let's start with what actually gets the data into the system. The containers in the middle live on my home machine because my server IP hasn't been whitelisted yet. They connect to Census, the API for PlanetSide 2, via WebSocket for player activity (online/offline), XP (headshots, revives), and deaths (kills). The container in the last row of the list makes HTTP requests for other data: mapping player IDs from events to player names and getting friends lists. They also connect to the containers on the right via dedicated WebSocket clients (one path for each). Once the events are finally on my server, they get published onto the queues.

[Diagram: PS2 ingestors]

Below are the listeners that publish events on queues. They arrive at dispatchers, which save the data in the database, and then get routed to the queues that the final WebSocket server listens to. This is the WebSocket server that finally publishes the events to the users listening for them.

[Diagram: internal listeners and dispatchers]
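As a rough sketch of what a dispatcher does – not the actual code; the queue names and the database step are placeholders – using Node's amqplib:

const amqp = require('amqplib')

// Placeholder: in the real system this goes through the API's service client
async function saveToDatabase(event) {}

async function main() {
  const conn = await amqp.connect('amqp://localhost')
  const ch = await conn.createChannel()
  await ch.assertQueue('events.incoming')
  await ch.assertQueue('events.outgoing')

  // Consume raw events, persist them, then route them onwards
  await ch.consume('events.incoming', async (msg) => {
    const event = JSON.parse(msg.content.toString())
    await saveToDatabase(event)
    ch.sendToQueue('events.outgoing', Buffer.from(JSON.stringify(event)))
    ch.ack(msg)
  })
}

main()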

Traefik binds both host names to the same WebSocket server. The web servers using Next.js publish static pages - they don't interact with anything downstream; that work is left for the WebSocket server. All the data needed is stored in localStorage, which is sent when connecting to the WebSocket server, and the custom songs for the game are stored in IndexedDB because it can store bigger sounds. This might've been overkill because the sounds you want to play when getting a kill are not that big anyway.
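On the browser side, the flow is roughly this – the URL, message shape, and playSoundFor are made up for illustration:

// Placeholder: look up the custom sound in IndexedDB and play it
function playSoundFor(event) {}

// On page load: connect and send the settings kept in localStorage
const settings = JSON.parse(localStorage.getItem('settings') || '{}')
const ws = new WebSocket('wss://example.com/events') // illustrative URL

ws.addEventListener('open', () => {
  ws.send(JSON.stringify({ type: 'subscribe', settings }))
})

ws.addEventListener('message', (ev) => {
  playSoundFor(JSON.parse(ev.data))
})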

[Diagram: web servers and the public WebSocket server]

Next, let's have a look at the bot.

[Diagram: the bot's connections]

The bot also connects to the WebSocket server and the database. It only sends a message to the WebSocket server when it needs to check a player's name, which may have to be fetched from the API (going all the way to my home machine).

The bot also allows users to get DMs when someone in their friends list is online.

[Diagram: the friends-list cron job]

This works as a cron job running every hour - it queries the friends lists for the Discord users in the database. Since it only gets their player IDs, it also makes queries for the player names where they are missing. This is done because those players might not have logged in for a long time, and this system hasn't been online for that long. Next time they are online, they will already exist in the system.

Some containers run in an ACTIVE-ACTIVE state, e.g., the dispatchers, meaning they all pull events from the queues. Some containers, e.g., the listeners, are in an ACTIVE-PASSIVE state, meaning only one of them is accepting WebSocket connections. Traefik beautifully takes care of routing only to the active one. The containers running on my home machine are also replicated on a remote Raspberry Pi. The replica will only get elected if the primary stops receiving messages for some time or gets disconnected (for example, if my internet is down).

Below are some screenshots from Grafana.

[Grafana screenshots]

Conclusion

This is ps2tracker and ps2immersion. I didn't anticipate they would become the projects they are today, or that I would learn so much by working on them. After 30-90 days of gathering data, I will be able to work on the cheater detection system. Until then, I will be working on refining the existing system. For example, I have a service-discovery mechanism that automatically creates queues for the two WS instances, and I want to get rid of it.

For more info about the websites, as well as trying them yourself, please follow the links:

Cypress and its inner workings


Cypress is a powerful, open-source end-to-end testing framework designed for modern web applications. It enables developers to write, run, and debug tests for anything that runs in a browser. Here are the key aspects of Cypress:

Main Features:

  • Real-time browser testing with automatic reloading
  • Easy debugging with detailed error messages and stack traces
  • Built-in time travel debugging with DOM snapshots
  • Automatic waiting for elements and API requests
  • Network traffic control and stubbing capabilities
  • Screenshots and video recording of test runs

Key Characteristics:

  • JavaScript-based testing (tests are written in JavaScript/TypeScript)
  • Runs directly in the browser alongside your application
  • Simple setup with minimal configuration
  • Comprehensive documentation and active community support
  • Cross-browser testing capabilities (Chrome, Firefox, Edge, Electron)

Use Cases:

  • End-to-end testing
  • Integration testing
  • Unit testing (though less common)
  • Visual regression testing
  • API testing

Cypress has become popular among developers for its developer-friendly approach, rich debugging features, and ability to provide fast and reliable test results for modern web applications.

How to create your own project with Cypress

I recommend using VS Code going forward, as it's a great environment for developing JavaScript projects.

The first step you are going to take when starting off with Cypress is creating a separate project that contains just the Cypress test suites for your main code.

To get started, create an empty folder, empty project, and install the cypress dependency.

mkdir test-cypress
cd test-cypress
npm init -y
npm install cypress --save-dev

At this point you have an empty project with a dependency. It's what you need, but it's not as useful as having a good starting point.

Scaffolding the Cypress project (template)

Simply run the command below, and all the files you need will be created. Cypress does this because it recognises you are running it for the first time.

npx cypress open

Note: on macOS you might get a prompt saying that VS Code doesn't have permission to run; you'll see this when Cypress opens the browser, and you won't be able to click on the webpage.

In the window that opens, you have the option to create some starter files. Those are great if you are unfamiliar with Cypress, and you can learn a lot from them. For the rest of this article I've created a separate project, so we won't go through the files that Cypress generates.

The basics of Cypress

Cypress is used to automate the actions of a user on a webpage. They might interact with certain elements such as forms, links, and buttons. They might also navigate to different pages and expect to see certain text on a page. They might log in, log out, or register. They might upload a document or a picture. All these actions may store state on the server and set cookies on the client.

Your Cypress tests should ideally check every interaction a user might take on the website. This way you are free to change the website, run the tests, and be confident that your users won't have issues when you deploy a new version. It's meant to remove all the manual work that might otherwise need to be done.

Cypress is just one technology able to do this. Other frameworks such as Selenium and Playwright exist. People prefer Cypress for its ease of use, visualisation of steps, the large number of libraries and plugins that work with it, its popularity, and its ecosystem, such as Cypress Cloud.

My expectation of you going forward

If you want to follow along with the tutorial, I expect you to have Cypress open using the command npx cypress open and to be familiar with the UI page that opens test suites. You can find this in the navigation bar under the tab "Specs".

Once you create new files, they will automatically appear there.

If you open a file containing a test and modify the test on your system, Cypress will automatically detect that and rerun your test.

Writing our first test

Now that you know where your tests live, let's write a simple test in a file cypress/e2e/1-checkbox.cy.js:

describe('checkbox form', () => {
  it('checks the box, submits, and sees success', () => {
    cy.visit("https://play.ioan.blog/")
    cy.get('#form-checkbox #form-checkbox-element').check()
    cy.get('#form-checkbox button').click()
    cy.get('#form-checkbox').contains("Success!").should('be.visible')
  })
})
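With the Cypress app open, clicking the spec runs it. You can also run the whole suite headlessly from the terminal:

npx cypress run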

3-line optimisation in Habitica

I'm using an RPG TODO list that gets me going. Unfortunately, the app was a little bit slow when scrolling through my big list. Luckily for me, the project is open source, so I was able to track the problem down and fix it myself.

There were many parts of the application that could've caused this, so I had to pull pieces out one by one and see if everything was still slow.

Finally, I found the error. Instead of showing text with a TextView, Habitica uses an emoji library to display it. So instead of displaying the Unicode for a smile, it actually replaces it with a bitmap image. I initially thought the lag on my phone came from the RecyclerView – because the bottleneck was when creating each of the subviews (one per task) – but it turned out to be the rendering of the emojis. For each character in the text it would go through a huge decision tree, even when no emojis were present in the text. Each letter took about 1ms, so a text such as "Daily – 10" would have around a 10ms delay. You can see the changes I've made here.

I'm surprised that it was such an easy fix.

My Prismic experience

I wanted a new template for my blog. And I got curious: what has happened in this space over the past few years? There must have been some innovation, somewhere. And indeed there has. WordPress came up with version 5, which many people seemed to dislike because the editor didn't fulfil the promised expectations. Is that all?

Headless CMSs started popping up. They promise they'll take care of the backend; you just have to call an API and you'll get the content from their database. I mean your database, just hosted on their end. Actually, just some data somewhere. Point being, one problem is taken care of. All you have to worry about is making a simple webapp that reads from this API, meshes it with some template, and displays it. It could be as simple as having two routes: one for your homepage, and one for /my-page-uri.

Headless CMSes are popular with the big guns out there. For example, Prismic is used by the likes of Netflix, Google, Deliveroo, DigitalOcean. They are fantastic for quick promotional campaigns or what looks like static websites. By that I mean, no comments, no ads, just a page. Again, you don't have to worry about setting up a database, how you're going to connect to it, security, permissions, etc.

It seemed interesting, so I started playing around with one of the few that are free. Either I would be bought in and willing to pay more, or I would've said: you know what, it's not for me.

I was going to use Nuxt.js, as it renders content for web crawlers, and that is important. Newer versions of Google's bots do crawl webapps with dynamic frontend routing, but there are other crawlers out there, and I bet they can more easily parse a website if it's simpler. I'd rather not trust a crawler to parse my dynamically rendered links. What if there is a delay between one link showing and another, or the crawler's browser has a bug? It wouldn't get far on my website, and pages would be missing from searches.

And away I went. I set up Prismic - they offer a free plan and a bunch of ways to get started. I set up the fields I want every page to have, which were a URI, a title, an image, and the content. Really easy to do.

I'd almost finished porting over the design I initially had in WordPress – from its PHP files and the like – when I found an issue. I was new to Nuxt.js, and all along I was ignorant of the fact that a simple blog page was taking 400ms or more to show in my browser. I ignored it because I told myself it had to be something temporary. Maybe it was just how Nuxt.js is when you are developing (I didn't have a lot of experience with it back then), and it wasn't that evident anyway. Until you are ready to push to production, and you find out that WordPress was taking 10ms to deliver a fully loaded HTML page of my homepage from my host in the Netherlands to the UK, while the new blog I'd built was taking between 450ms and 1600ms on localhost.

Well, that couldn't be ignored anymore. I started researching the issue; it's mentioned on their website, and I got confirmation that it is, in fact, a problem they are working on.

And… I was done with Prismic. For what it's worth, I prefer to know my blog is built on solid ground. I could've used Cloudflare – there would be a fair bit of cost annually, but I liked WordPress. What's wrong with WordPress?

I've updated to WordPress 5 on my blog, and thankfully I didn't lose anything – please do a backup before this (there is a plugin for it). And I liked it. I typed all of this on my phone for the first time. I don't like sitting and writing; I like to be on the go. You know us millennials.

WordPress has an API endpoint at /wp-json, and after a quick look, it had almost everything I wanted to replace Prismic with: the posts are there, the pages are there. I think it even comes built into WordPress 5 directly.
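For illustration, fetching posts through the standard /wp-json/wp/v2 routes looks like this (swap in your own domain):

// List the five most recent posts via the WordPress REST API
fetch('https://example.com/wp-json/wp/v2/posts?per_page=5')
  .then((res) => res.json())
  .then((posts) => {
    for (const post of posts) {
      console.log(post.title.rendered, post.link)
    }
  })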

In the end, it was quite a ride: from wanting to uplift my WordPress template, to using Nuxt.js as my rendering tool, keeping WordPress just in the backend (except when I need to write articles), and having a very dynamic website. It loads quickly wherever you click. I like that.

What do you think?

If you want to see how I've done it, it's all on github! ioandev/ioanblog

Getting started with Wallaby, Jest, Vue and Nuxt.js

Wallaby

Plenty of examples and use cases for Wallaby can be found on their homepage, and also on their examples pages.

You can get a free 14-day trial from here, and you can also do it from your IDE once you've installed the plugin.

Jest

Traversy Media has a great introduction to it, video here. There are also Jest's official docs if you prefer text.

Other useful links:

  1. mocking an entire class before it's used in your SUT
  2. checking if Wallaby is running your code, or if Jest is
  3. jest-extended – expands jest with more ways to assert
  4. WebSocket client mocking
  5. preventing console.logs from bubbling up to output during tests
  6. snapshot testing

Vue & Vuex

I've followed a 20+ hour course, "Vue JS 2 – The Complete Guide (incl. Vue Router & Vuex)", on Udemy. The author keeps updating it, and it's comprehensive.

Conclusion

This list will be updated with more resources over time, and I'll write a comprehensive nuxtjs tutorial explaining how it's used for running this blog. Subscribe to my newsletter for updates.

Azure – serverless functions

Businesses are now moving into the cloud for lower costs, lower maintenance, zero upfront cost and zero hardware maintenance, and better performance. The big dogs are AWS, Azure, and Google Cloud. Needless to say, Azure is preferred by Microsoft-tech-based businesses due to the compatibility of languages and software, and the ease of staying in the same ecosystem.

The days when you had to pay for an entire SQL server and the license upfront, maintain unscalable monolithic applications, and worry about the hardware not being able to handle your software are almost over. We're transitioning into serverless functions, renting servers if still needed, and non-relational databases such as Cosmos DB.

Azure provides many benefits out of the box. SQL servers can be replicated across their regions. They guarantee high uptime. Serverless functions mean that the API endpoints of your application don't have to sit on the same box, or be replicated across many boxes. Instead, they can sit separately as individual functions 'somewhere' in the cloud – you don't have to worry about it. If you get 1 million unexpected requests to an endpoint where you usually get 10k, Azure is going to take care of that by running them 'somewhere'. You can still access your data in very similar ways. Rewriting the code does take some time to accommodate the serverless paradigm, but it's easy once the foundation is down: where do you get the data from, and how? What do you have to return? Who do you let access that endpoint, and how do you authenticate the users?
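For a taste of the model, a minimal HTTP-triggered function in JavaScript looks roughly like this (a sketch of the Node.js programming model of that era; the greeting logic is made up):

// index.js – a minimal HTTP-triggered Azure Function
module.exports = async function (context, req) {
  const name = (req.query.name || (req.body && req.body.name)) || 'world'
  context.res = {
    status: 200,
    body: `Hello, ${name}`
  }
}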

How many web servers do we need to host the API endpoints? One. Oh, look, we get more requests now. Start another one, and another one to funnel the traffic between the two other machines. Now we have one reverse proxy machine and two API servers. More traffic? Easy, bring up more machines. However, even if you have access to an unlimited number of VMs, they still cost a lot more than serverless functions, and they need time to be spun up and taken down. You're paying for Windows licenses for those machines, and you might have to maintain the software. Just use serverless functions.

Serverless functions work not only for API endpoints but also for processing queues. One such queuing service in Azure is called the Service Bus. If you're uploading a video to YouTube, that video file is taken through a multitude of steps before being shown to users. First, it gets uploaded somewhere. Then, screenshots are taken for the thumbnail, where you can choose which one you like most. Then, the video is converted to a smaller compressed size at the same resolution, as well as to lower resolutions. These can be done in parallel, and they can be done with serverless functions. Recently, Azure Container Instances were introduced, with durable functions on their way.
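A queue-triggered function has the same shape – the runtime hands you the dequeued message instead of a request. Again a sketch; the binding name mySbMsg would come from the function's function.json:

// index.js – a Service Bus queue-triggered Azure Function
module.exports = async function (context, mySbMsg) {
  // mySbMsg is the dequeued message, e.g. a video-processing job
  context.log('Processing message:', mySbMsg)
  // ...take screenshots, transcode to lower resolutions, etc.
}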

To learn more: Azure released Functions v2 out of beta not long ago. Go have a look.

https://azure.microsoft.com/en-us/blog/introducing-azure-functions-2-0/

Freelance project: Coffee Maker

I am unable to disclose much about this project. Mainly, it is a GUI app written in C++ that is the main gate through which the user interacts with a professional coffee maker.

Challenges:

  1. Make the C++ GUI application from scratch, drawing everything from buttons to centred text to images, etc.
  2. Compile the C++ app for an inexpensive, small, not-too-capable embedded i.MX 6UL processor.
  3. Compile a Yocto Linux build that includes the dependencies of the C++ app as well as the application itself, and deploy it.
  4. Have a channel of communication between the C++ GUI app and the other hardware pieces in the coffee maker.

Everything hardware-related was handled by my partner.

The UI comprised the following: a homepage, a menu page, a page for adding coffee/tea profiles (certain periods of time at certain bar pressures), setting the time, and others.

Freelance project: Fortnite OCR

A weekend project that turned into a 4-day project: Fortnite OCR Detector. OCR stands for optical character recognition.

A fellow streamer needed a way to show his number of victories and kills for the day above his webcam in order to add something unique for his audience.

For this project, images are provided as input by taking screenshots of the entire monitor where the game resides (otherwise it's seen as a hack and you'll get banned), then cropping the image to only the sections of the screen where the kill count resides, as well as some other sections, to detect when a victory happened and update the counters accordingly.

I wanted to keep things separate, so the OpenCV library was used in a C++ console app, 'Detector', in charge of the detection described above. Most of the time was spent here, tweaking images, numbers, and algorithms to get the best accuracy for the numbers.

I then had a Python process that brought many things together:

  1. Instantiate a child process 'Detector', read its data, and update the counters.
  2. Create a simple local HTTP web server that delivers an 'index.html' file at 127.0.0.1:5511, as well as the custom font at 127.0.0.1:5511/font.{ext}.
  3. The delivered index.html file would also create a client to a local WebSocket server and reconnect in case the connection dropped. It had to be backwards compatible with IE6 so that the streaming application OBS could load the local page, replace the colour 'white' with 'transparent', and display it on top of the game.
  4. Create a simple WebSocket server that updates the clients (the browsers in this case) with the most up-to-date counters, using Facebook's Tornado.
  5. Create a window using the wxPython library, a cross-platform free GUI toolkit.

The Python script was converted to a Windows executable using PyInstaller.

The application running the OpenCV detection algorithms and the number-crunching takes 0% GPU and <2% CPU on an Intel Core i7-4790 3.6GHz box.

Head-tracked stereo display board

Internship, summer 2014. The project was made in Unity, and it used a Kinect and a Gigabyte BRIX.

Prototyping a head-tracked stereo display board with horizontal parallax tracking, so the display follows the viewer.

Internship project: LIMA

I spent around 6 months of my internship working on a software project called LIMA, whose purpose was to plan for potential future hazards. I was in charge of managing the update thread, slowing and speeding things up (like a movie, 1x to 16x), building a timeline of events that happen at specified time intervals, and making it possible to attach data to markers placed on the map – from text to images to PDFs – as well as open them inside the application itself.

The main stakeholder was the HFRS (Humberside Fire & Rescue Service), although interest was shown by British Petroleum as well as Tata Steel. I was involved in some of the demonstrations for these clients.

I was an intern in The Digital Centre, Hull from August 2016 to August 2017.

minedive.com

Every year I take on a big project. This time I partnered up with a YouTuber to bring about a Minecraft community with YouTubers and fans at its core.

Step 0: Minimum Viable Product

Before going ham on your project, think about what's needed for the MVP and don't go much further than that in the beginning. All you need is validation that the project can be a success, and a way to measure that success.

Step 1: Bring the hype

Requirements:

  • A fan base
  • A website where people can put in their name and email if they are interested in the project. Add a countdown to it.

Process:

  1. Make a YouTube video briefly presenting the project, 1-3 minutes long.
  2. Make a Discord server where people can join and discuss the server further.
  3. Make a YouTube page that people can like.
  4. Release the video.

Step 2: Test if the idea is viable

You've got a month now. Look out for:

  1. How many people are interested: count the number of likes on all platforms and how many people put their email address on your website.
  2. The number of true fans that are by your side.
  3. How many people are helping you and giving you feedback.

Step 3: Release the product

By now you should have your MVP ready to release, so do that. Expect a lot of bugs to be found in the last 5 minutes before the release, and don't panic too much.

Minedive.com

All of minedive ran in a Vagrant VM. There was no reason to test it on Windows.

minedive – This is by far the most important project I've ever worked on.

It involved: the main website, a forum, a bunch of Minecraft servers, a single dedicated server capable of handling a lot of load, communication between the servers, website, and forum, a caching system, adapting to new technology, and a large list of storage options: MySQL, PostgreSQL, Redis, and LDAP.

It also involved a careful analysis of what's development and what's production, which meant having different credentials and environments. It involved me and my business partner working on overlapping parts of the project, such as configuring the Minecraft servers, making Minecraft plugins from scratch in Java, and marketing and retention – inviting people over and keeping them interested via Facebook and YouTube. I was responsible for all of the website and server admin work, while my business partner focused on the Minecraft servers and converting his YouTube fans into players.

It also meant interacting with our users on a daily basis on the forum as well as the Discord server. A Facebook page was put in place and easily hit 300 likes in under two months: MineDiveRO/

Before the launch, which was announced a month prior, we had 1,000 people visit the website and put their Minecraft username and email in. On launch day the Discord was being flooded with messages, and there was a nice hype about it.

The team comprised other roles that we gave to people who wanted to help; they had access to a private server.

Managing a team of people, everyone with their own desires, was also a great experience.

Trello was the backbone of project management, with frequent Skype calls and frequent Slack chatter.

Puppet was the pipeline from debug to release. It took too much time to do the uploading manually every day, and although the process could be outlined in 7-10 steps, human error got in the way. A script was made to accomplish this with Puppet.

Admin stuff: I learned a lot about Unix – the basics: Unix files, users, permissions, user creation, installation of services, firewalls (ufw), uptime management (supervisor), flood issues (fail2ban), constant backups, logs. Basically, any problem that could arise was mine to fix. Security posed a high risk, so I got a consultant to help me with it.

From the small pre-launch website that just had a counter of people who signed up, to the entire suite of apps – I also created a small Android app that got notified by the website whenever another signup was made. This was more of a distraction, and it should've been done later anyway.

  • SendGrid for sending e-mails.
  • Programming/scripting languages: Python, Java, JavaScript.
  • Others: HTML5, CSS3.
  • Libraries: Tornado (WebSocket server), PayPal IPN, React.js for the website frontend.
  • Login system: Apereo CAS – logging into the website, forum, and Minecraft server with the same username & password.
  • Redis queue – queuing messages between applications: website, forum, WebSocket server, 5 Minecraft servers.
  • Storage: LDAP, MySQL, PostgreSQL, Redis.

PayPal IPN would notify the site, which then notified the first Minecraft server in the cluster, so that the user got the rank they bought instantly and the Minecraft server itself didn't have to request such a list every minute. Scaling was a big factor in the making of this project.

Python powered the minedive website, the WebSocket server, and the Discord bot.
The forum used Misago, an open-source Python forum. I had to develop the CAS script for logging in, which you can find here. I also had to develop another front end for it, as the default one didn't accommodate our needs – too much stuff in the wrong place. It took a lot of guesswork and refactoring of the templates, both in Python and in JavaScript. The JavaScript was also done with React, and already knowing that technology from the website helped.