Running Ollama on newer AMD graphics cards
If you are using AMD and want to run models locally, you have to install the Radeon PRO edition of the driver (it's still free).
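On Linux, for example, that typically means AMD's unified driver installer with the ROCm use case enabled; this is a sketch assuming Ubuntu and AMD's amdgpu-install script, so check AMD's docs for your distro or for Windows:
amdgpu-install --usecase=rocm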
Cypress is a powerful, open-source end-to-end testing framework designed for modern web applications. It enables developers to write, run, and debug tests for anything that runs in a browser.
Cypress has become popular for its developer-friendly approach, rich debugging features, and fast, reliable test results for modern web applications.
I recommend using VS Code going forward, as it's a great environment for developing JavaScript projects.
The first step when starting off with Cypress is creating a separate project that contains just the Cypress test suites for your main code.
To get started, create an empty folder, initialise an empty project, and install the Cypress dependency.
mkdir test-cypress
cd test-cypress
npm init -y
npm install cypress --save-dev
At this point you have an empty project with a single dependency. It's what you need, but it's not as useful as having a good starting point.
Simply run the command below and all the files you need will be created; Cypress does this because it recognises you are running it for the first time.
npx cypress open
Note: on macOS you might get a prompt saying that VS Code doesn't have permission to run; you'll see this when the browser opens, and you won't be able to click on the webpage.
On the page that opens up, you have the option to create some starter files. They're worth creating if you are unfamiliar with Cypress, and you can learn a lot from them. For the purposes of the rest of this article I've created a separate project, so we won't go through the files Cypress generates.
Cypress is used to automate the actions of a user on a webpage. They might interact with certain elements such as forms, links, buttons. They might also navigate to different pages and expect to see certain text on a page. They might login, logout, or register. They might upload a document or a picture. All these actions may store state on the server and set cookies on the client.
Your cypress tests should ideally check every interaction a user might take on the website. This way you are free to change the website, run the tests, and be confident that your users won't have issues when you deploy a new version of the website. It's meant to remove all the manual work that might otherwise need to be done.
Cypress is just one technology that can do this; other frameworks such as Selenium and Playwright exist. People prefer Cypress for its ease of use, its visualisation of steps, the large number of libraries and plugins that work with it, its popularity, and its ecosystem, such as Cypress Cloud.
If you want to follow along with the tutorial, I expect you to have Cypress open using the command npx cypress open
and to be familiar with the UI page that opens test suites. You can find it in the navigation bar under the "Specs" tab.
Once you create new files, they will automatically appear there.
If you open a file containing a test and modify the test on your system, Cypress will automatically detect the change and rerun your test.
Now that you know where your tests live, let's write a simple test in a file cypress/e2e/1-checkbox.cy.js
describe('checkbox form', () => {
  it('shows a success message after checking the box and submitting', () => {
    cy.visit("https://play.ioan.blog/")
    cy.get('#form-checkbox #form-checkbox-element').check()
    cy.get('#form-checkbox button').click()
    cy.get('#form-checkbox').contains("Success!").should('be.visible')
  })
})
I'm using Habitica, an RPG-style TODO list that gets me going. Unfortunately, the app was a little slow when scrolling through my big list. Lucky for me, the project is open source, so I was able to track the problem down and fix it myself.
There were many parts of the application that could have caused this, so I had to pull pieces out one by one and check whether it was still slow.
Finally, I found the error. Instead of showing text with a TextView, Habitica uses an emoji library to display it: instead of displaying the unicode for a smile, it replaces it with a bitmap image. I initially thought the lag on my phone came from the RecyclerView – because the bottleneck was when creating each of the sub-views (one per task) – but it turned out to be the rendering of the emojis. For each character in the text it would go through a huge decision tree, even when no emojis were present in the text. Each letter took about 1ms, so a text such as "Daily – 10" would have around 10ms of delay. You can see the changes I've made here.
I'm surprised that it was such an easy fix.
I wanted a new template for my blog. And I got curious: what has happened in this space over the past few years? There must have been some innovation, somewhere. And indeed there has. WordPress came out with version 5, which many people seemed to dislike because the editor didn't fulfil the promised expectations. Is that all?
Headless CMSs started popping up. They promise to take care of the backend: you just call an API and you get the content from their database. I mean your database, just hosted on their end. Actually, just some data somewhere. Point being, one problem is taken care of. All you have to worry about is making a simple webapp that reads from this API, meshes it with a template, and displays it. It could be as simple as having two routes: one for your homepage, and one for /my-page-uri.
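To make that concrete, here's a minimal sketch of such a two-route webapp using Express and Node's built-in fetch; the CMS endpoint (cms.example.com) and the response shape are assumptions for illustration, since every real CMS has its own SDK and URL scheme.
// Minimal two-route webapp reading pages from a (hypothetical) headless CMS API.
const express = require("express")
const app = express()

// Fetch one page's content from the CMS by its URI.
async function getPage(uri) {
  const res = await fetch(`https://cms.example.com/api/pages/${uri}`)
  if (!res.ok) throw new Error(`CMS returned ${res.status}`)
  return res.json() // assumed shape: { title, content }
}

// Route one: the homepage.
app.get("/", async (req, res) => {
  const page = await getPage("homepage")
  res.send(`<h1>${page.title}</h1>${page.content}`)
})

// Route two: any /my-page-uri.
app.get("/:uri", async (req, res) => {
  const page = await getPage(req.params.uri)
  res.send(`<h1>${page.title}</h1>${page.content}`)
})

app.listen(3000)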
Headless CMSs are popular with the big guns out there. For example, Prismic is used by the likes of Netflix, Google, Deliveroo, and DigitalOcean. They are fantastic for quick promotional campaigns or for what look like static websites – by that I mean no comments, no ads, just a page. Again, you don't have to worry about setting up a database, how you're going to connect to it, security, permissions, and so on.
It seemed interesting, so I started playing around with one of the few that are free. Either I'd be bought in and willing to pay more, or I'd say: you know what, it's not for me.
I was going to use nuxtjs, as it renders content for web crawlers, and that is important. Newer versions of Google's bots do crawl webapps with dynamic frontend routing, but there are other crawlers out there, and I bet they can parse a website more easily if it's simpler. I'd rather not trust a crawler to parse my dynamically rendered links. What if there's a delay between one link appearing and the next, or the headless browser has a bug? It wouldn't get far on my website, and pages would be missing from searches.
And away I went. I set up Prismic – they offer a free plan and a bunch of ways to get started. I set up the fields I want every page to have, which were a URI, a title, an image, and the content. Really easy to do.
I'd almost finished porting over the design I initially had in WordPress – from its PHP files and the like – when I found an issue. I was new to nuxtjs, and all along I'd been ignorant of the fact that a simple blog page was taking 400ms or more to show in my browser. I ignored it because I told myself it had to be something temporary; maybe it's just how nuxtjs is when you're developing (I hadn't had a lot of experience with it by then), and it wasn't that evident anyway. Then I got ready to push to production and found out that WordPress was taking 10ms to deliver a fully loaded HTML page of my homepage from my host in the Netherlands to the UK, while the new blog I'd built was taking between 450ms and 1600ms on localhost.
Well, that couldn't be ignored anymore. I started researching the issue; it's mentioned on their website, and I got confirmation that it is, in fact, a problem they are working on.
And... I was done with Prismic. For what it's worth, I prefer to know my blog is built on solid ground. I could've used Cloudflare – there would be a fair bit of cost annually – but I liked WordPress. What's wrong with WordPress?
I've updated my blog to WordPress 5, and thankfully I didn't lose anything – please do a backup before doing this (there is a plugin for it). And I liked it. I typed all of this on my phone for the first time. I don't like sitting down to write; I like to be on the go. You know us millennials.
WordPress has an API endpoint at /wp-json, and after a quick look, it had almost everything I wanted to replace Prismic with: the posts are there, the pages are there. I think it even comes enabled by default in WordPress 5.
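As a quick sketch, pulling posts out of that endpoint looks roughly like this (the /wp-json/wp/v2/posts route is part of WordPress core; example.com stands in for your own host):
// List the latest posts via the WordPress REST API.
async function getPosts() {
  const res = await fetch("https://example.com/wp-json/wp/v2/posts")
  const posts = await res.json()
  // Titles and content come back as { rendered: "<html>" } objects.
  return posts.map((p) => ({
    slug: p.slug,
    title: p.title.rendered,
    content: p.content.rendered,
  }))
}

getPosts().then((posts) => console.log(posts.map((p) => p.slug)))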
In the end, it was quite a ride: from wanting to uplift my WordPress template, to using nuxtjs as my rendering tool, keeping WordPress just as the backend (except when I'm writing articles), and ending up with a very dynamic website. It loads quickly wherever you click. I like that.
What do you think?
If you want to see how I've done it, it's all on github! ioandev/ioanblog
Plenty of examples and use cases for Wallaby can be found on their homepage, and also on their examples pages.
You can get a 14-day free trial from here, and you can also start one from your IDE once you've installed the plugin.
Traversy Media has a great introduction to it – video here. There are also Jest's official docs if you prefer text.
Other useful links:
I've followed a 20+ hour course on Udemy, "Vue JS 2 – The Complete Guide (incl. Vue Router & Vuex)". The author keeps updating it, and it's comprehensive.
This list will be updated with more resources over time, and I'll write a comprehensive nuxtjs tutorial explaining how it's used for running this blog. Subscribe to my newsletter for updates.
Businesses are now moving into the cloud for lower costs, lower maintenance, zero upfront hardware cost, zero hardware upkeep, and better performance. The big dogs are AWS, Azure, and Google Cloud. Needless to say, Azure is preferred by Microsoft-based tech businesses due to the compatibility of languages and software, and the ease of staying in the same ecosystem.
The days when you had to pay upfront for an entire SQL Server and its license, maintain unscalable monolithic applications, and worry about the hardware not being able to handle your software are almost over. We're transitioning to serverless functions, renting servers where they're still needed, and non-relational databases such as CosmosDB.
Azure provides many benefits out of the box. SQL servers can be replicated across their regions. They guarantee high uptime. Serverless functions mean that the API endpoints of your service don't have to sit on the same box, or be replicated across many boxes. Instead, they can sit separately as individual functions 'somewhere' in the cloud – you don't have to worry about it. If you get 1 million unexpected requests to an endpoint where you usually get 10k, Azure takes care of that by running them 'somewhere'. You can still access your data in very similar ways. Rewriting the code does take some time to accommodate the serverless paradigm, but it's easy once the foundation is down: where do you get the data from, and how? What do you have to return? Who do you let access that endpoint, and how do you authenticate the users?
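To give a feel for the shape of such a function, here's a minimal sketch of an HTTP-triggered Azure Function in JavaScript; the endpoint's logic is an illustrative assumption, and the trigger itself is declared in an accompanying function.json.
// An HTTP-triggered Azure Function: Azure runs this 'somewhere' per request.
module.exports = async function (context, req) {
  // In a real endpoint, access checks and user authentication would go here.
  const name = req.query.name || (req.body && req.body.name)

  context.res = {
    status: 200,
    body: { message: `Hello, ${name || "anonymous"}!` },
  }
}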
How many web servers do we need to host the API endpoints? One. Oh look, we get more requests now. Start another one, and another one to funnel the traffic between the two machines. Now we have one reverse proxy machine and two API servers. More traffic? Easy, bring up more machines. However, even if you have access to an unlimited number of VMs, they still cost a lot more than serverless functions, and they need time to spin up and tear down. You're paying for Windows licenses on those machines, and you might have to maintain the software. Just use serverless functions.
Serverless functions work not only for API endpoints but also for processing queues. One such queuing service in Azure is called the Service Bus. If you upload a video to YouTube, that video file goes through a multitude of steps before being shown to users. First, it gets uploaded somewhere. Then screenshots are taken for the thumbnail, where you can choose the one you like most. Then the video is converted to a smaller compressed size at the same resolution, as well as to lower resolutions. These steps can be done in parallel, and they can be done with serverless functions. Recently Azure Container Instances were introduced, with Durable Functions on the way.
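A queue worker looks much the same. Here's a sketch of a Service Bus-triggered function in JavaScript; the message shape ({ videoId, resolution }) is an assumption for illustration, and the binding again lives in function.json.
// A Service Bus-triggered Azure Function: one invocation per queued message.
module.exports = async function (context, message) {
  context.log(`Transcoding video ${message.videoId} to ${message.resolution}`)
  // ...download the source, run the transcode, upload the result...
}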
To learn more: Azure released Functions v2 out of beta not long ago. Go have a look.
https://azure.microsoft.com/en-us/blog/introducing-azure-functions-2-0/
I am unable to disclose much about this project. It is mainly a GUI app written in C++ that serves as the main gate through which the user interacts with a professional coffee maker.
Everything hardware-related was handled by my partner.
The UI comprised the following: a homepage, a menu page, a page for adding coffee/tea profiles (certain periods of time at certain bar pressures), time settings, and others.
A weekend project that transformed into a 4-day project: Fortnite OCR Detector. OCR stands for Optical Character Recognition.
A fellow streamer needed a way to show his number of victories and kills for the day above his webcam in order to add something unique for his audience.
For this project, images are provided as input by taking screenshots of the entire monitor where the game resides (anything more invasive is seen as a hack and you'll get banned), cropping the image down to the sections of the screen where the kill count resides, as well as a few other sections used to detect when a victory happened, and updating the counters accordingly.
I wanted to keep things separate, so the OpenCV library was used in a C++ console app, 'Detector', in charge of the detection described above. Most of the time was spent here, tweaking images, numbers, and algorithms to get the best accuracy on the digits.
I then had a Python process that brought everything together.
The Python script was converted to a Windows executable using PyInstaller.
The application running the OpenCV detection algorithms and the number-crunching takes 0% GPU and <2% CPU on an Intel Core i7 4790 3.6GHz box.
Internship, summer 2014. The project was made in Unity, using a Kinect and a Gigabyte BRIX.
Prototyping a Head-tracked stereo display board with horizontal parallax tracking so the display follows the viewer.
I spent around 6 months of my internship working on a software project called LIMA, whose purpose was planning for potential future hazards. I was in charge of managing the update thread, slowing down and speeding up time (like a movie, 1x to 16x), building a timeline of events that happen at specified time intervals, and making it possible to attach data – from text to images to PDFs – to markers placed on the map, and to open it inside the application itself.
The main stakeholder was the HSFRS (Humberside Fire & Rescue Service), although interest was shown by British Petroleum as well as Tata Steel. I was involved in some of the demonstrations for these clients.
I was an intern in The Digital Centre, Hull from August 2016 to August 2017.
Every year I engage in a big project. This time I partnered up with a YouTuber to bring about a Minecraft community with YouTubers and fans at its core.
Before going ham on your project, think about what's needed for the MVP and don't go much further than that in the beginning. All you need is validation that the project is a success, and a way to measure that success.
You've got a month now.
By now you should have your MVP ready to release, so do that. Expect a lot of bugs to be found in the last 5 minutes before release, and don't panic too much.
All of minedive ran in a Vagrant box. There was no reason to test it on Windows.
minedive – This is by far the most important project I've ever worked on.
It involved: the main website, a forum, a bunch of Minecraft servers, a single dedicated server capable of handling a lot of load, communication between the servers, website, and forum, a caching system, adapting to new technology, and a long list of storage options: MySQL, PostgreSQL, Redis, and LDAP.
It also involved a careful analysis of what's development and what's production, which meant separate credentials and environments. It meant my business partner and I working on areas of the project that overlapped, such as configuring the Minecraft servers, making Minecraft plugins from scratch in Java, and marketing and retention – inviting people over and keeping them interested via Facebook and YouTube. I was responsible for all of the website and server admin work, while my business partner focused on the Minecraft servers and on converting his YouTube fans into players.
It also meant interacting with our users on a daily basis, on the forum as well as on the Discord server. A Facebook page was put in place and easily hit 300 likes in under two months: MineDiveRO.
Before the launch, which was announced a month prior, we had 1,000 people visit the website and put in their Minecraft username and email. On launch day the Discord was being flooded with messages, and there was a nice hype about it.
The team also comprised other roles, which we gave to people who wanted to help; they needed access to a private server.
Managing a team of people, everyone with their own desires, was also a great experience.
Trello was the backbone of the project management. Frequent Skype calls. Frequent Slack chatter.
Puppet was the pipeline from debug to release. It took too much time to do the uploading manually every day, and although the process could be outlined in 7-10 steps, human error got in the way, so a script was made to accomplish this with Puppet.
Admin work: I've learned a lot about Unix. The basics: Unix files, users, permissions, user creation, installation of services, firewalls (ufw), uptime management (supervisor), flood issues (fail2ban), constant backups, logs. Basically, any problem that could arise was mine to fix. Security posed a high risk, so I got a consultant to help me with it.
From the small pre-launch website that just had a counter of sign-ups, the project grew into the entire suite of apps. I also created a small Android app that gets notified by the website whenever another sign-up is made. This was more of a distraction and should've been left for later anyway.
SendGrid for sending e-mails.
Programming/scripting languages: Python, Java, JavaScript
Others: HTML5, CSS3
Libraries: Tornado (websocket server), PayPal IPN, React.js for the website frontend
Login system: Apereo CAS – logging into the website, forum, and minecraft server with the same username & password.
Redis queue – queuing messages between applications: website, forum, websocket server, 5 Minecraft servers
Storage: LDAP, MySQL, PostgreSQL, Redis
PayPal IPN would contact the site, and then the first Minecraft server in the cluster, so that the user got the rank they bought instantaneously and the Minecraft server itself didn't have to request such a list every minute. Scaling was a big factor in the making of this project.
Python powered the minedive website, the websocket server, and the Discord bot.
The forum used Misago, an open-source Python forum. I had to develop the CAS script for logging in, which you can find here. I also had to develop another front end for it, as the default one didn't accommodate our needs – too much stuff in the wrong places. It took a lot of guesswork and refactoring of the templates, in Python as well as in JavaScript. The JavaScript was also done with React, and already knowing that technology from the website helped.
I've spent the past 2 weeks working on a project featuring an e-commerce website selling shoes.