Author Archives: V

software testing

I’ve been coding since around 2017, when I took my first Computer Science course in high school. Since then, I’ve worked on plenty of school projects and a couple of personal projects as well. In all that time, software testing was never something I bothered with meaningfully, which is somewhat fair considering that most of these projects and assignments weren’t especially complex. I would just print out results for each bit of code as I went along to make sure that the code I had just written actually worked, but I never wrote actual automated tests.

This semester, I had some exposure to standardized software testing when I worked on a homework assignment in my Software Design and Construction course that required me to adjust some code along with the tests that verify the code produces its intended output. At first, I was somewhat confused, since I had never worked with test files before, but I appreciated the streamlined nature of this kind of testing. We also had to read up on some software testing methods for a Scrum developer homework assignment we did for this class, Software Process Management.

Today, I wanted to research some software testing basics, and came across a post on IBM’s blog. The post goes over what testing even is and why it matters (particularly the importance of quality control for businesses), along with types of software tests and good testing practices.

The different types of software tests are fairly easy to understand, and I’ve come across all of them in some capacity before. Acceptance tests ensure the system as a whole functions as intended, integration tests ensure that software components work together as intended, and unit tests ensure the smallest testable snippets work as intended. Performance tests evaluate how well software runs on different hardware, and stress tests check how much load the software can take without breaking. Regression tests check whether new changes harm the software at large, and usability tests evaluate how effective the software is for end-users.
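For instance, a unit test for the smallest testable snippet can be sketched in plain Java. The Calculator class below is made up for illustration, and I’m using a hand-rolled check rather than a real framework:

```java
// A tiny unit under test: the smallest testable snippet.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

// A minimal hand-rolled unit test; a real project would use a framework
// like JUnit, but the idea is the same: run the unit, compare the result.
class CalculatorTest {
    static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError("FAILED: " + message);
        }
    }

    public static void main(String[] args) {
        check(Calculator.add(2, 3) == 5, "2 + 3 should equal 5");
        check(Calculator.add(-1, 1) == 0, "-1 + 1 should equal 0");
        System.out.println("All unit tests passed.");
    }
}
```

The appeal of automating this is that the checks run the same way every time, instead of me eyeballing printed output.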

Some good testing practices listed in the article are continuous testing, bug tracking, and service virtualization. Continuous testing is the practice of testing each build as soon as it’s available, which is fairly simple. Bug tracking is, well, bug tracking. The post mentions automated monitoring tools for tracking defects, but I think of it more in the sense of keeping an issues list on a git repository, as that’s what I’m more familiar with. Service virtualization is a little more complicated: it simulates functionality that hasn’t been implemented yet, reducing dependencies and letting testing start sooner.
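As a sketch of the service virtualization idea, a fake implementation can stand in for a dependency that doesn’t exist yet. All of the names here are hypothetical:

```java
// Sketch of service virtualization: a fake service simulates a dependency
// that hasn't been implemented yet. All names here are hypothetical.
interface PaymentService {
    boolean charge(String accountId, double amount);
}

// The virtualized stand-in: simple canned behavior instead of a real backend.
class FakePaymentService implements PaymentService {
    @Override
    public boolean charge(String accountId, double amount) {
        return amount > 0;  // simulate: positive charges succeed
    }
}

class CheckoutDemo {
    // The code under test depends only on the interface, so it can be
    // exercised long before the real PaymentService exists.
    static String checkout(PaymentService payments, String accountId, double total) {
        return payments.charge(accountId, total) ? "ORDER PLACED" : "PAYMENT DECLINED";
    }

    public static void main(String[] args) {
        PaymentService fake = new FakePaymentService();
        System.out.println(checkout(fake, "acct-1", 19.99));  // ORDER PLACED
        System.out.println(checkout(fake, "acct-1", 0.0));    // PAYMENT DECLINED
    }
}
```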

What I’m mostly interested in is how these practices are applied, and I’ll likely look into that further at a later time. I understand these concepts fairly well, but the idea of automating tests, at least at my current level of understanding, sounds a bit daunting. I’m sure it saves a lot of time on larger-scale projects, where manually testing each piece of your code against multiple cases would take a lot of time. I’m interested to see what this looks like in an example.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.


anti-patterns

During our Software Construction, Design and Architecture class, we’ve gone over a multitude of design techniques, patterns and tools to ensure that we write quality code. We’ve also touched on design / code smells and the concept of technical debt. While design smells can give you an indication that something might be wrong with your code, whether in a small section or in the overall design, anti-patterns represent a much more obvious, prevalent problem in your code.

Some of the most common anti-patterns are ones I’ve heard joked about in conversations with my friends, classmates and even teachers, like spaghetti code, and copying and pasting code snippets from random repos (which isn’t just bad coding practice; it can also infringe upon the licenses, or lack thereof, placed on that code). I think the reason these are fairly common among the circles I’ve been in is just that everyone has done it before for a school assignment or something, and it’s just funny to joke about.

Some anti-patterns are a bit more involved with designing software in a meaningful sense, though. In a blog post from Lucidchart, some other anti-patterns found in software development and design are golden hammers, god objects, boat anchors and dead code. What’s interesting about these is that they are direct manifestations of design smells, in the most obvious ways.

For example, a boat anchor is a piece of code that isn’t being used yet but might, maybe, be used in a future version. This results in needless complexity: an addition that isn’t necessary and makes your code harder to read. Dead code is similar: code that gets executed but isn’t actually used in the output of a program, causing immobility. Using a golden hammer means using the same tool for a bunch of different jobs under the assumption that it’s the best tool for all cases; depending on how one goes about it, this contributes to fragility, needless repetition and opacity. God objects are similar in a sense, as they’re objects that do too much and are responsible for too many functions. This violates the encapsulation principles that are good practice in software design, while also resulting in immobility, viscosity and possibly even more.
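A god object might look something like this sketch (all class and method names are made up), alongside a split that gives each responsibility its own class:

```java
// Anti-pattern sketch: a "god object" responsible for far too much.
// (All class and method names here are made up for illustration.)
class GodStore {
    void addProduct(String name, double price) { /* inventory logic */ }
    void chargeCustomer(String accountId, double amount) { /* billing logic */ }
    void shipOrder(int orderId) { /* shipping logic */ }
    void emailReceipt(String address) { /* notification logic */ }
    void generateSalesReport() { /* reporting logic */ }
}

// One possible refactoring: each class owns a single responsibility,
// which preserves encapsulation and keeps the design mobile.
class Inventory {
    void addProduct(String name, double price) { /* inventory logic */ }
}

class Billing {
    void chargeCustomer(String accountId, double amount) { /* billing logic */ }
}

class Shipping {
    void shipOrder(int orderId) { /* shipping logic */ }
}
```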

This reinforces the idea of checking for code smells and refactoring code as needed, because otherwise you will build upon a foundation that was fundamentally problematic, and the rest of your code will reflect it no matter how much better it is. This all ties into technical debt, and brings all of the software design principles, concepts and ideas together into a picture that says: please maintain your code cleanly!


copyright licenses

We’ve covered software licenses a fair amount in our Software Process Management course. The MIT license, Apache License, and GNU GPL were the most notable for different reasons, but I figure that there are far more licenses that we haven’t necessarily gone over directly in class. I wanted to take the time to familiarize myself with some of these licenses.

Starting off with a CopyLeft license (a type of license I find myself fairly sympathetic to for the sake of keeping software open source), the Mozilla Public License, or MPL, is a weak sort of CopyLeft. According to this post, this means that users must disclose changes, but only for a much narrower set of code per change: it is file-based, in short. When compared to the LGPL, a notable weak CopyLeft license, the MPL additionally allows users to sublicense. Notable uses of this license are, of course, Firefox and Thunderbird, along with LibreOffice. I do like the idea of having a middle ground between being completely permissive and wanting to “spread” open source.

BSD licenses are permissive licenses that generally have few restrictions. They seem to fall in line with the big permissive licenses (like MIT) and don’t have a viral effect, but they do require a copy of the license and a disclaimer of liability. According to a post on AppMaster, the differences between the MIT and BSD licenses are that the MIT license doesn’t require a liability disclaimer, the BSD license doesn’t require patent protection, the BSD license is compatible with the GPL, and the MIT license requires a permission notice. I don’t like the lack of patent protection with this one; I know that’s been a fairly annoying issue in the past for various projects.

Lastly, I took a look into the OpenSSL license, as it was on the front page of tldrLegal. Funnily enough, it’s a license styled off of the BSD license, with the express purpose of being used with the OpenSSL project. The aforementioned tldrLegal page for the license mentions in its summary that the license has a special advertising clause, which means that ads mentioning the software in any way must acknowledge the use of OpenSSL. The takeaway, for me, is that if you know exactly what you need from your license, and an existing license has everything you’d like except for one specific requirement or permission, you can build off of that license with the requirement or permission added (assuming you have a legal expert of some sort write it).

Overall, I really appreciate the work that has been done to ensure that we have a variety of different ways to license our software depending on what we want done with it. It’s definitely an important aspect of open-source development, and having the resources to find licenses that are closest to the project’s vision is an excellent way to focus on the development and not have to worry about the legal stuff.


application architecture, serverless

In CS-343, we’ve gone over three major architectures for designing applications: the monolith, client-server and microservices architectures. Each has its strengths and weaknesses, but it seems as though if you can go with microservices, you should, as it provides stronger scalability and reliability even if it is much more complex.

From this, it seems like you would use the other architectures if you cannot currently afford to build a microservices architecture, do not have the time to do it, or if your team does not have the required skills or manpower to build it up. That makes sense when the application doesn’t need to be complex, but perhaps it will become that complex as time progresses, and having built from a monolith architecture ends up being a nuisance.

Out of curiosity, I wanted to learn about some additional types of architectures. I found this blog post written by Paul Gillin, and what stuck out to me was the serverless architecture model. His explanation of it is that it is an evolution of the microservices architecture, with the main departure being that it is, well, serverless. Services are run from software containers as opposed to being pulled from a server.
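The core idea of a serverless service, as I understand it, is a stateless function that handles one event at a time. A minimal sketch of that shape in Java might look like this; the interface below is made up for illustration, since real providers (AWS Lambda, for example) supply their own handler interfaces:

```java
import java.util.Map;

// Sketch of the serverless idea: each service is a stateless function that
// handles one event at a time, so the platform can start and stop instances
// freely. This interface is made up for illustration; real providers supply
// their own handler interfaces.
interface EventHandler {
    String handle(Map<String, String> event);
}

class GreetHandler implements EventHandler {
    @Override
    public String handle(Map<String, String> event) {
        // No instance state survives between invocations.
        return "Hello, " + event.getOrDefault("name", "world") + "!";
    }
}

class ServerlessDemo {
    public static void main(String[] args) {
        EventHandler handler = new GreetHandler();
        System.out.println(handler.handle(Map.of("name", "V")));  // Hello, V!
        System.out.println(handler.handle(Map.of()));             // Hello, world!
    }
}
```

Because nothing is stored between calls, the platform can run as many or as few copies of the handler as demand requires.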

This idea interests me, not just because of not needing to construct a server with it, but also because it utilizes containers in a very effective way. In this course, I was interested in the practical uses of containers outside of simplifying development within a team, and it seems like this implementation is what I was looking for. Of course, there are some cases where a serverless architecture wouldn’t work, and this architecture is mostly suited for experienced teams due to its complexity, but it’s an interesting idea nonetheless.

After looking at other websites, I must mention that there is a distinction between a pure serverless architecture and one that utilizes containers. This webpage on serverless architecture from Datadog makes this clear in the “Serverless Architecture vs. Container Architecture” section. Essentially, with a pure serverless architecture the cloud provider (AWS, for example) manages its own servers, which means you don’t have to manage them, but you do have to work with what the provider gives you. With a container architecture, you have to update and maintain your containers, system settings, and dependencies for everything to work properly, in place of a server.

With this distinction in mind, I definitely wouldn’t want to rely on an external cloud service, and so a container-based architecture does seem more appealing. Ultimately, though, every tool (and architecture in this case) has its uses, so it’s important to know and understand many architectures so you know what to use when you need to use it.


law of demeter

Over the course of this class (Software Construction, Design, and Architecture), there have been design concepts that are very easy to grasp at first glance, and others that take much more time to digest. I was surprised to find that the Law of Demeter, or Principle of Least Knowledge, is a fairly intuitive rule, yet one that feels almost too restrictive in nature.

Essentially, the Law of Demeter is the principle that a method of an object should not communicate with any element that isn’t ‘adjacent’ to it. According to a blog post by JavaDevGuy (thinking in terms of a Java application of the rule), the elements allowed by the law are the object itself (this), objects passed as arguments to the method, instance variables of the object, objects created by the method, global variables, and methods that the method calls.

This is most easily explained by a negative example. For example, if a class Car has a method with a Dashboard object as an argument, it can technically call something like dashboard.getVersion(). But if a class Garage method has a Car argument, the method should not call something like car.getDashboard().getVersion(). Maybe this is a silly example, but this applies to more practical code as well.
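In code, the example might look like this; the Car, Dashboard and Garage classes follow the paragraph above, with the method bodies made up:

```java
// The Car / Dashboard / Garage example in code. Method bodies are made up.
class Dashboard {
    String getVersion() { return "2.1"; }
}

class Car {
    private final Dashboard dashboard = new Dashboard();

    Dashboard getDashboard() { return dashboard; }

    // Demeter-friendly: Car answers the question itself, so callers
    // never have to reach "through" it into the Dashboard.
    String getDashboardVersion() { return dashboard.getVersion(); }
}

class Garage {
    // Violation: reaches through the Car into a "stranger" object.
    String versionViolating(Car car) {
        return car.getDashboard().getVersion();
    }

    // Compliant: only talks to the Car it was actually handed.
    String versionCompliant(Car car) {
        return car.getDashboardVersion();
    }
}
```

Both methods return the same value; the difference is that the compliant one doesn’t depend on Car’s internal structure.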

JavaDevGuy goes further to say that most Getter methods violate this law. This interpretation seems restrictive, as it makes it much more difficult to just get work done (of course, I’m not the most experienced in the design aspect of software engineering so I could be wrong). It seems more practical to use the law to get rid of chaining in your code, as it causes needless complexity. Chaining methods together, regardless of how necessary it is, always ends up looking kind of ugly. I do feel like it is a necessary evil sometimes though.

As it stands, I can understand that this sort of practice can minimize complexity and reduce code repetition, but it does feel like sometimes you need to put things together in this way to get the desired output. The aforementioned blog post explains when code is violating the law, but unless my eyes and brain are too tired to read properly, the author doesn’t really give any good replacement options for the code; the few alternatives given don’t seem very desirable. This is typically the problem with negative rules: they impose a restriction without a solution, so you have to scramble to figure out how to work around it.

Perhaps I’ll understand better when we cover this material in class.


learning concurrency

I’ve heard the terms concurrency and multithreading a lot, but I never really bothered looking into what they actually do. All I’ve really known about them was that they make things run faster. I mostly associated multithreading with hyperthreading; I’ve known that CPUs can use multiple cores for the same application to speed up runtime and make games perform better, but I was always sort of confused about how some games have issues actually taking advantage of modern high-end CPUs. Usually, this is fixed by some sort of modification, if one has been written. That being said, my association between the two is only really surface level.

Hyperthreading is really just related to the physical hardware, which makes it different from multithreading in programming. Instead, multithreading is a form of concurrency in which a program has multiple threads that can run simultaneously, with each thread having its own operations and processes that it executes. What this ultimately means is that within a program, multiple operations can be executed at the same time.

This really fascinates me coming from the perspective of only writing straightforward code. While I sort of knew intuitively that programs can probably do multiple tasks at the same time, I’ve only experienced this on the end-user side of things, rather than the individual writing the program. After looking into how threads work in Java on a BairesDev post regarding Java concurrency, I can really see how powerful of a tool concurrency can be for runtime efficiency. This post essentially goes over what concurrency is and the ‘basics’ of utilizing built-in multithreading functions in Java, along with the advantages and disadvantages that this design comes with.
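As a minimal sketch of the idea, two Java threads can each work on half of a task at the same time. This example is my own, not from the post:

```java
// A minimal sketch of multithreading in Java: two threads each sum half
// of an array at the same time, then the results are combined.
class ParallelSum {
    static long sum(int[] data) throws InterruptedException {
        long[] partial = new long[2];
        int mid = data.length / 2;

        // Each thread works on its own half, so there is no shared
        // mutable state and no race condition.
        Thread left = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += data[i];
        });
        Thread right = new Thread(() -> {
            for (int i = mid; i < data.length; i++) partial[1] += data[i];
        });

        left.start();    // both threads now run concurrently
        right.start();
        left.join();     // wait for both to finish before combining
        right.join();
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;  // 1..1000
        System.out.println(sum(data));  // 500500
    }
}
```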

Of course, it does seem like it takes a careful approach to make sure that the implementation of such a tool is actually beneficial to the project. Even with this relatively simple tutorial, I did find myself a little confused at some points, particularly at the point where Locks are introduced. Though this could be the effect of the hour at which I decided to start writing this blog post, it still stands to reason that the complexity of writing a multithreaded program may result in a more difficult development process, especially in debugging.
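For what it’s worth, the role of a Lock became clearer to me in a small sketch: count++ is a read-modify-write rather than a single atomic step, so two threads can overwrite each other’s updates, and holding a ReentrantLock around the increment makes it exclusive. The counter example here is my own, not from the tutorial:

```java
import java.util.concurrent.locks.ReentrantLock;

// Why Locks matter: "count++" is a read-modify-write, not one atomic step,
// so unsynchronized threads can lose updates. A ReentrantLock makes the
// increment exclusive.
class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    void increment() {
        lock.lock();        // only one thread may hold the lock at a time
        try {
            count++;        // safe: no other thread is mid-increment
        } finally {
            lock.unlock();  // always release, even if an exception occurs
        }
    }

    int value() { return count; }
}

class LockDemo {
    static int run(int perThread) throws InterruptedException {
        LockedCounter counter = new LockedCounter();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) counter.increment();
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        return counter.value();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(10_000));  // always 20000, thanks to the lock
    }
}
```

Without the lock, the same program could print a different, smaller number on each run, which is exactly the kind of subtle debugging difficulty mentioned above.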

Regardless, I was truly fascinated by the subject matter and I’m really excited to be going over concurrency in our course (CS-343). This seems like a tool I would really like to use as I enjoy toying with logistics in video games and the like.


code review, what it is and why it matters

For my first blog post for CS-348 and in general (in terms of actual content), I wanted to look into code review. I already had an inkling as to what it could entail, but I wanted to know what sorts of techniques and tools are used in looking over our peers’ (and our own) code.

For this blog post, I consulted a post on SmartBear to get a better understanding of it all. The post establishes the reasoning for why we need code review: it reduces the excess workload and costs caused by unreviewed code being pushed through. The post also gives four common approaches to code review in the current day (which is noted to be much improved over methods found in the past). These approaches are email threads, pair programming, over-the-shoulder code review, and tool-assisted reviews.

An email thread provides advantages in versatility but sacrifices the ease of communicating that you get in person. Pair programming is the practice of working on the same code at the same time, which is great for mentoring and reviewing at the same time as coding, but doesn’t give the same objectivity as other methods. Of course, over-the-shoulder reviews are simply having a colleague look over your code at your desk, which while fruitful, doesn’t provide as much documentation as other methods. Lastly, tool-assisted reviews are also straightforward, utilizing software to assist with code review.

The SmartBear post goes on to say that tracking code reviews and gathering metrics helps improve the process overall and should not be skimped on. Some empirical findings from Cisco’s code review process in 2005 are given as well: according to an analysis of it, code reviews should cover 200 lines of code or less, and reviews should take an hour or less for optimal results. Some other points are given as well if you visit the post.

Considering most of my ‘career’ has been independent coding (that is, coding as the sole contributor), this was rather interesting to me. I’ve done code reviews for my peers, helping them with assignments and the like, while I’ve only really utilized tools and software to assist myself. It’s interesting to see how something as simple as looking over someone’s code on their computer is such an important step in the software development process, but it certainly makes sense. I also wonder how much the code review process has changed since the popularization of AI companions such as ChatGPT and GitHub’s Copilot. Perhaps these tools have made code review with our peers less important, but I wonder if it’s now more important to have our peers second-guess the AI’s suggestions in case of mistakes. Nonetheless, I’m sure that a solid grounding in the actual ramifications of code review will prove very useful during my programming career.


‘hello world’ etc

This blog is mostly going to be utilized to fulfill course requirements for my CS-343 and CS-348 courses, along with any future courses I take that will include blog posts as part of the specifications for course completion. I’m not the biggest fan of writing, but I do think it may be a useful exercise overall.

Let’s see where this takes us.
