
some design principles

We’ve covered a great many design principles during the course of this semester in Software Construction, some of which I’ve even covered in these blog posts (the Law of Demeter comes to mind). For the end of the semester, I wanted to have a little review of some of the principles that I don’t remember all too well.

Starting off with one that isn’t too complicated, I wanted to briefly refresh on the YAGNI (or You Ain’t Gonna Need It) principle. According to a blog post by Tatum Hunter of Built In, the practice entails only building features when they are actually needed. I found this post fairly insightful in the way it goes over how customers might want a large-scale feature now, so you may have to talk them down to a more realistic goal, or simply say no, to avoid adding functionality that won’t be necessary.

The principle of “striving for loosely coupled designs between objects that interact” is essentially what the observer design pattern implements. I believe I went over this in a previous blog post, but I did find another post by Harold Serrano that provides a brief summary as well. Serrano states that the principle means that objects should be able to interact with each other, but shouldn’t know much about each other.
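
To make that idea a bit more concrete for myself, here’s a minimal Java sketch of what that loose coupling could look like (the class names are just made up for illustration): the EventSource only knows that its listeners implement a tiny Observer interface, and nothing else about them.

```java
import java.util.ArrayList;
import java.util.List;

// The subject only knows about this small interface, not concrete classes.
interface Observer {
    void update(String event);
}

class EventSource {
    private final List<Observer> observers = new ArrayList<>();

    void subscribe(Observer o) {
        observers.add(o);
    }

    void publish(String event) {
        // Notify everyone; the subject has no idea who they actually are.
        for (Observer o : observers) {
            o.update(event);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        EventSource source = new EventSource();
        source.subscribe(event -> System.out.println("Logger got: " + event));
        source.subscribe(event -> System.out.println("Display got: " + event));
        source.publish("data changed");
    }
}
```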

For the principle of “encapsulating what varies,” a simple blog post from Alex Kondov explains what this means and why we do it. Essentially, we want to encapsulate the parts of the code we write that are prone to change, so that we don’t have to change a whole block of code for something that should be a one-line fix. This makes our code adaptable and cleaner.
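
Here’s a small, hypothetical Java sketch of how I picture that working: the shipping calculation is the part that’s prone to change, so it gets hidden behind an interface, and swapping it out never touches the checkout code.

```java
// The part that tends to change (how shipping is priced) lives behind
// an interface, so replacing it doesn't touch the checkout logic.
interface ShippingCost {
    double costFor(double weightKg);
}

class FlatRateShipping implements ShippingCost {
    public double costFor(double weightKg) {
        return 5.00; // same price no matter the weight
    }
}

class PerKiloShipping implements ShippingCost {
    public double costFor(double weightKg) {
        return 1.50 * weightKg;
    }
}

public class Checkout {
    private final ShippingCost shipping;

    public Checkout(ShippingCost shipping) {
        this.shipping = shipping;
    }

    public double total(double itemPrice, double weightKg) {
        // This method never changes when the shipping rules do.
        return itemPrice + shipping.costFor(weightKg);
    }

    public static void main(String[] args) {
        Checkout checkout = new Checkout(new PerKiloShipping());
        System.out.println(checkout.total(20.00, 2.0)); // 23.0
    }
}
```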

Inversion of control is used for abstraction simplicity. Kent C. Dodds explains that we want our abstraction to have less responsibility, while the user has more. He uses an example of a filter method that uses inversion of control, and one that doesn’t. The difference is that when the control is passed into the method rather than handled in the method, there is a lot less going on within the method, which increases simplicity. I found this really interesting because I was thinking about doing this for our GuestInfoBackend homework, before I kind of lost interest because I’m not too well-versed with JavaScript, and I didn’t have much time to do it, hah. Needless to say, it’s a really interesting tool in my opinion and I find it very practicable.
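
I wanted to sketch the filter idea for myself in Java rather than JavaScript (this isn’t Dodds’ actual code, just my own made-up analogue): the first method owns every rule and needs a new flag each time one is added, while the second just accepts the rule from the caller.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class FilterDemo {
    // Without inversion of control: the method has to know every rule,
    // and it grows another flag every time a new rule is needed.
    static List<Integer> filterOld(List<Integer> items, boolean skipZeros, boolean skipNegatives) {
        List<Integer> result = new ArrayList<>();
        for (Integer item : items) {
            if (skipZeros && item == 0) continue;
            if (skipNegatives && item < 0) continue;
            result.add(item);
        }
        return result;
    }

    // With inversion of control: the caller passes the rule in,
    // so the method itself stays tiny.
    static List<Integer> filter(List<Integer> items, Predicate<Integer> keep) {
        List<Integer> result = new ArrayList<>();
        for (Integer item : items) {
            if (keep.test(item)) {
                result.add(item);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(-2, 0, 3, 7);
        System.out.println(filterOld(numbers, true, true)); // [3, 7]
        System.out.println(filter(numbers, n -> n > 0));    // [3, 7]
    }
}
```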

This semester, we’ve gone over a great many ways to ensure proper software design, and these practices and skills are a great way to streamline your thought process. I think the important thing, as I’ve said before, is to not treat these as concrete rules, but to consider them in higher priority before writing code. Sometimes, you can’t get the perfect solution that fits all the proper software design principles, and that’s okay. It’s a matter of making sure the code is solid, pun intended.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

process models

With the final blog post of the semester for Software Process Management, I wanted to review the process models we went over (waterfall and agile approaches), and perhaps take a look at some more models that we didn’t go over in class.

The waterfall approach, as we went over it this semester, essentially means that everything is planned at the start of the development process. This means that the plan is rigid during the course of the process. With the agile approach, the plan is highly flexible and able to change during the course of the process. Each increment of development, usually a few weeks, is followed by a plan for the following few weeks, and the cycle continues, with each aspect of the plan being adjustable as needed.

The agile approach is great for large projects and very versatile, but what other process models are there? In a blog post written by Omar Elgabry, he lays out two models that sort of make up the agile process model (incremental and iterative), as well as two other models, the spiral model and the prototype model.

Both the incremental and iterative models are based on increments of development, but what goes into each increment is what sets them apart from each other and from the agile model. With the incremental approach, a complete feature is finished with each increment. With the iterative model, each increment contains a small portion of all features. Compare this with agile, in which each increment is a small functional portion of each feature.

The prototyping process isn’t a whole process by itself; rather, it’s a tool in the form of a process for testing the feasibility of a project. It’s in the name: a prototype is quickly built according to a customer’s requirements, which is helpful when the resource cost of a full project isn’t clear. The customer is usually in the loop for the development of the prototype. Once this prototyping phase is complete, the development team can opt for another process model to move forward.

The spiral model is used for cases where there is high risk associated with the project, typically large projects. The model is rarely used, but is good for testing feasibility. Essentially, each loop in a visual spiral is a phase of the project, and each phase is made up of objective setting, risk analysis, development, and a planning step where it is determined whether development should continue into another phase.

While I appreciate the spiral model for its uniqueness, I can definitely see why most teams use the agile model. It’s essentially a direct improvement on the incremental and iterative models in terms of versatility, and if you need a prototype, you can still build one using that process. Still, it’s interesting to know that we are still trying to find more ways to streamline our development to be more efficient, and I’m sure someday even agile will be overtaken by another model.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

software design patterns

We’ve gone over a couple of design patterns in class this semester. Some that come to mind are the singleton and factory patterns, both of the creational variety. This latter half of the semester has been more focused on practical applications of software construction, taking a look into the Thea’s Pantry system that uses a microservices approach, but earlier on we were working with design decision-making and approaches.

A software design pattern is a reusable solution to a problem that occurs often. The three main types of design patterns are creational, structural and behavioral. This is all well and good, but what do these types entail? I found this blog post from NetSolutions regarding the subject, and it gives a brief explanation of each as well as some examples.

Creational design patterns are solutions for the creation of objects and how they are used. The aforementioned singleton and factory patterns are creational because they dictate the way that objects are meant to be created based on the circumstances surrounding them. Singleton ensures a single instance of an object that can be called, while the factory is a sort of constructor of objects that utilizes an interface to form an object from subclasses.
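
As a quick refresher for myself, here’s a minimal Java sketch of both (the Config and Shape classes are made up for illustration): the singleton blocks outside construction and hands back one shared instance, while the factory hides which subclass actually gets built.

```java
// Singleton: only one instance ever exists, reached through getInstance().
class Config {
    private static final Config INSTANCE = new Config();

    private Config() { } // private constructor blocks outside creation

    static Config getInstance() {
        return INSTANCE;
    }
}

// Factory: callers ask for a Shape by name and never touch the subclasses directly.
interface Shape {
    double area();
}

class Circle implements Shape {
    public double area() { return Math.PI; } // unit circle, just for illustration
}

class Square implements Shape {
    public double area() { return 1.0; } // unit square
}

class ShapeFactory {
    static Shape create(String kind) {
        switch (kind) {
            case "circle": return new Circle();
            case "square": return new Square();
            default: throw new IllegalArgumentException("unknown shape: " + kind);
        }
    }
}

public class CreationalDemo {
    public static void main(String[] args) {
        System.out.println(Config.getInstance() == Config.getInstance()); // true, same object
        System.out.println(ShapeFactory.create("circle").area());
    }
}
```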

Structural design patterns relate to object composition, providing better flexibility when working on a large-scale project. For example, an adapter is a structural design pattern that allows incompatible interfaces to work together, creating greater flexibility in the program. The facade pattern is a sort of application of encapsulation and abstraction, allowing for the hiding of complexity behind an interface, making it easier to work with.
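
A tiny, hypothetical Java sketch of the adapter idea: imagine a sensor class we can’t change that only reports Fahrenheit, wrapped so it fits the Temperature interface the rest of the program expects.

```java
// The interface our program expects to work with.
interface Temperature {
    double celsius();
}

// An "incompatible" class we can't change (imagine it comes from a library).
class FahrenheitSensor {
    double readFahrenheit() {
        return 98.6;
    }
}

// The adapter wraps the incompatible class so it fits the expected interface.
class FahrenheitAdapter implements Temperature {
    private final FahrenheitSensor sensor;

    FahrenheitAdapter(FahrenheitSensor sensor) {
        this.sensor = sensor;
    }

    public double celsius() {
        return (sensor.readFahrenheit() - 32) * 5.0 / 9.0;
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        Temperature temp = new FahrenheitAdapter(new FahrenheitSensor());
        System.out.println(temp.celsius()); // ~37.0
    }
}
```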

Behavioral design patterns deal with separate objects and the way they share responsibility and communicate. The strategy design pattern is an example of this, where algorithms are grouped into a family of interchangeable objects and the one that fits the situation is selected when it’s needed. The observer pattern links one object to many dependent observers, notifying them whenever the object they’re watching experiences an event.
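
Here’s a small Java sketch of the strategy pattern as I understand it (the sorting classes are just made up for illustration): the algorithms live behind one interface, and the caller picks which one to use at runtime.

```java
// A family of interchangeable algorithms behind one interface.
interface SortStrategy {
    int[] sort(int[] data);
}

class AscendingSort implements SortStrategy {
    public int[] sort(int[] data) {
        int[] copy = data.clone();
        java.util.Arrays.sort(copy);
        return copy;
    }
}

class DescendingSort implements SortStrategy {
    public int[] sort(int[] data) {
        int[] copy = new AscendingSort().sort(data);
        // Reverse the ascending result in place.
        for (int i = 0; i < copy.length / 2; i++) {
            int tmp = copy[i];
            copy[i] = copy[copy.length - 1 - i];
            copy[copy.length - 1 - i] = tmp;
        }
        return copy;
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        int[] numbers = {3, 1, 2};
        SortStrategy strategy = new DescendingSort(); // pick the algorithm at runtime
        System.out.println(java.util.Arrays.toString(strategy.sort(numbers))); // [3, 2, 1]
    }
}
```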

It seems important to have these tools in your belt, considering that many of the problems these design patterns solve occur fairly regularly even when developing different pieces of software. The one thing I would consider being wary of is using software design patterns at all times, even when you may not need them. For example, the factory pattern is best used when working on a fairly large project, but is not necessary at a smaller scale. Of course, when a project goes from small to large, a refactor may be called for, but perhaps the factory pattern is not the best tool for the job; instead it’s the builder or prototype pattern. There is no one-size-fits-all solution, and so it’s also a skill to know when and how to use patterns and not just apply them randomly.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

software testing

I’ve been coding since around 2017, when I took my first Computer Science course in high school. Since then, I’ve worked on plenty of school projects and a couple of personal projects as well. Software testing had never been a thing I really bothered with meaningfully in this timespan, which is kind of fair considering that most of these projects and assignments weren’t really too complex. I would just print out results for each bit of code as I went along to make sure that the code I had just written actually worked, but I didn’t really go and write actual automated tests.

This semester, I had some exposure to standardized software testing when I worked on a homework assignment in my Software Design and Construction course that required me to adjust some code along with the tests that make sure the code produces the intended output. At first, I was sort of confused, considering I had never worked with test files, but I appreciated the streamlined nature of this testing. We also had to read up on some software testing methods for a Scrum developer homework assignment we did for this class, Software Process Management.

Today, I wanted to research some software testing basics, and came across a post on IBM’s blog. The post goes over some reasons to software test (the importance of quality control for businesses, what testing even is, etc.) along with types of software tests and practices.

The different types of software tests are fairly easy to understand, and I’ve come across all of them in some capacity before. Acceptance tests ensure the system as a whole functions as intended, integration tests ensure that software components work as intended, and unit tests ensure the smallest testable snippets work as intended. Performance tests evaluate how well software works on different hardware, and stress tests check how much load the software can take without breaking. Regression tests see if new features harm the software at large, and usability tests evaluate how effective the software is for end-users.

Some good testing practices listed in the article are continuous testing, bug tracking, and service virtualization. Continuous testing is the practice of testing each build as soon as it’s available, fairly simple. Bug tracking is, well, bug tracking. The post mentions automated monitoring tools for tracking defects, but I sort of think of this in the sense of keeping an issues list on a git repository, as that’s what I’m more familiar with. Service virtualization is a little more complicated: it simulates functionality that hasn’t been implemented yet to reduce dependencies and allow testing sooner.

What I’m mostly interested in is the applications of these practices, and I’ll likely look into it further at a later time. I understand these concepts fairly well, but the idea of automating tests, at least at my current understanding of it, sounds a bit daunting. I’m sure it saves a lot of time on larger-scale projects, where testing each piece of your code manually for multiple cases will take a lot of time. I’m interested to see what this looks like in an example.
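
As a starting point, here’s roughly what I imagine an automated unit test looks like, using JUnit 5 (the Calculator class is something I made up, and this assumes JUnit 5 is on the classpath): each test method checks one behavior, and the test runner executes them all automatically instead of me printing results by hand.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// A made-up class under test, just for illustration.
class Calculator {
    int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("cannot divide by zero");
        }
        return a / b;
    }
}

// Each test method checks one behavior; the test runner executes them all automatically.
class CalculatorTest {
    @Test
    void dividesEvenly() {
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    void rejectsDivideByZero() {
        assertThrows(IllegalArgumentException.class, () -> new Calculator().divide(10, 0));
    }
}
```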

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

anti-patterns

During our Software Construction, Design and Architecture class, we’ve gone over a multitude of different design techniques, patterns and tools to ensure that we write quality code. We’ve also touched on some design / code smells and the concept of technical debt. While design smells can give you an indication of something that might be wrong with your code, whether in a small section or in the overall design, anti-patterns represent a much more obvious, prevalent problem in your code.

Some of the most common examples of anti-patterns are ones I’ve heard about in joking conversations with my friends, classmates and even teachers, like spaghetti code, and copying and pasting code snippets from random repos (which isn’t just bad coding practice, it could also infringe upon the licenses, or lack thereof, placed on that code). I think the reason why these are fairly common among the circles I’ve been in is just because everyone has done it before for a school assignment or something, and it’s just funny to joke about.

Some anti-patterns are a bit more involved with designing software in a meaningful sense, though. In a blog post from Lucidchart, some other anti-patterns found in software development and design are golden hammers, god objects, boat anchors and dead code. What’s interesting about these is that they are actual manifestations of design smells, in the most obvious ways.

For example, a boat anchor is a piece of code that isn’t being used yet but might, maybe, be used in a future version. This results in needless complexity, an addition that isn’t necessary and makes it harder to read your code. Dead code is similar: code that gets executed but isn’t even used in the output of the program, causing immobility. Using a golden hammer means you are using the same tool for a bunch of different jobs, thinking that it is the best tool for all cases. This contributes to fragility, needless repetition and opacity, depending on how the golden hammer is applied. God objects are similar in a sense, as they’re objects that do too much and are responsible for too many functions. This violates the encapsulation principles that are good practice in software design, while also resulting in immobility, viscosity and possibly even more.

This reinforces the idea of checking for code smells and refactoring code as needed, because otherwise you will build upon a foundation that was fundamentally problematic, and the rest of your code will reflect it no matter how much better it is. This all ties into technical debt, and brings all of the software design principles, concepts and ideas together into a picture that says: please maintain your code cleanly!

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

copyright licenses

We’ve covered software licenses a fair amount in our Software Process Management course. The MIT license, Apache License, and GNU GPL were the most notable for different reasons, but I figure that there are far more licenses that we haven’t necessarily gone over directly in class. I wanted to take the time to familiarize myself with some of these licenses.

Starting off with a copyleft license (a type of license I find myself fairly sympathetic to for the sake of keeping software open source), the Mozilla Public License, or MPL, is a weak sort of copyleft. According to this post on fossa.org, this means that users must disclose changes, but only for a much narrower set of code with each change; the copyleft applies file by file. When compared to the LGPL, a notable weak copyleft license, the MPL allows users to sublicense as an additional permission. Notable uses of this license are, of course, Firefox and Thunderbird, along with LibreOffice. I do like the idea of having a middle ground between being completely permissive and also wanting to “spread” open source.

BSD licenses are permissive licenses that generally have few restrictions. They seem to fall in line with the big permissive licenses (like MIT) and don’t have a viral effect, but they do require a copy of the license and a disclaimer of liability. According to a post on AppMaster, the differences between the MIT license and the BSD licenses are that the MIT license doesn’t require a liability disclaimer, the BSD licenses don’t provide patent protection, the BSD licenses are compatible with the GPL, and the MIT license requires a permission notice. I don’t like the lack of patent protection with this one; I know that’s been a fairly annoying issue in the past for various projects.

Lastly, I took a look into the OpenSSL license, as it was on the front page of tldrLegal. Funnily enough, it’s a license styled off of the BSD license, with the express purpose of being used with the OpenSSL project. On the aforementioned tldrLegal page for the license, it mentions in the summary that the license has a special advertising clause, which means that ads mentioning the software in any way must acknowledge the use of OpenSSL. The takeaway, for me, is that if you know exactly what you need from your license, and one that exists has everything you’d like except for one specific requirement or permission, you can build off of that license with that added requirement or permission (assuming you have a legal expert of some sort write it).

Overall, I really appreciate the work that has been done to ensure that we have a variety of different ways to license our software depending on what we want done with it. It’s definitely an important aspect of open-source development, and having the resources to find licenses that are closest to the project’s vision is an excellent way to focus on the development and not have to worry about the legal stuff.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

application architecture, serverless

In CS-343, we’ve gone over three major architectures for designing applications. These three are the monolith, client-server and microservices architectures. Each has its strengths and weaknesses, but it seems as though if you can go with microservices, you should, as it provides stronger scalability and reliability even if it is much more complex.

From this, it seems like you would use the other architectures if you cannot afford to build a microservices architecture currently, do not have the time to do it, or if your team does not have the required skills or manpower to build up the architecture. That makes sense in the case where the application doesn’t need to be complex, but as time progresses it may well become that complex, and having built on a monolith architecture ends up being a nuisance.

Out of curiosity, I wanted to learn about some additional types of architectures. I found this blog post written by Paul Gillin, and what stuck out to me was the serverless architecture model. His explanation of it is that it is an evolution of the microservices architecture, with the main departure being that it is, well, serverless. Services are run from software containers as opposed to being pulled from a server.

This idea interests me, not just because of not needing to construct a server with it, but also because it utilizes containers in a very effective way. In this course, I was interested in the practical uses of containers outside of simplifying development within a team, and it seems like this implementation is what I was looking for. Of course, there are some cases where a serverless architecture wouldn’t work, and this architecture is mostly suited for experienced teams due to its complexity, but it’s an interesting idea nonetheless.

After looking at other websites, I must mention that there is a distinction between a pure serverless architecture and one that utilizes containers. This webpage on the serverless architecture from DataDog makes this clear in the “Serverless Architecture vs. Container Architecture” section. Essentially, with a pure serverless architecture the cloud provider (AWS, for example) manages the servers, which means that you don’t have to manage them, but you do have to work with what they give you. With a container architecture, you have to update and maintain your containers, system settings, and dependencies for everything to work properly in place of a server.

With this distinction in mind, I definitely wouldn’t want to rely on an external cloud service, and so a container-based architecture does seem more appealing. Ultimately, though, every tool (and architecture in this case) has its uses, so it’s important to know and understand many architectures so you know what to use when you need to use it.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

law of demeter

Over the course of this class (Software Construction, Design, and Architecture), there have been design concepts that are very easy to grasp at first glance, and those that take much more time to digest. I was surprised to see that the Law of Demeter, or Principle of Least Knowledge, is a fairly intuitive rule, but it feels almost too restrictive in nature.

Essentially, the Law of Demeter is the concept that methods in an object should not communicate with any element that isn’t ‘adjacent’ to it. According to a blog post by JavaDevGuy (and thinking of a Java application of the rule), the elements that are allowed by the law are the object itself (this), objects passed as arguments to the method, instance variables of the object, objects created by the method, global variables, and methods that the method calls.

This is most easily explained by a negative example: if a class Car has a method with a Dashboard object as an argument, it can technically call something like dashboard.getVersion(). But if a class Garage has a method with a Car argument, that method should not call something like car.getDashboard().getVersion(). Maybe this is a silly example, but this applies to more practical code as well.
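
Putting that silly example into Java (sketched by me, not taken from the post), the usual fix seems to be giving Car a small method that answers the question itself, so Garage never has to reach through to the Dashboard:

```java
class Dashboard {
    String getVersion() {
        return "2.1";
    }
}

class Car {
    private final Dashboard dashboard = new Dashboard();

    Dashboard getDashboard() {
        return dashboard;
    }

    // The car exposes what callers actually need, so they
    // don't have to reach through the dashboard themselves.
    String dashboardVersion() {
        return dashboard.getVersion();
    }
}

class Garage {
    // Violates the Law of Demeter: Garage reaches through Car into Dashboard,
    // an object it was never handed directly.
    String versionBad(Car car) {
        return car.getDashboard().getVersion();
    }

    // Respects the law: Garage only talks to its direct argument, the Car.
    String versionGood(Car car) {
        return car.dashboardVersion();
    }
}

public class DemeterDemo {
    public static void main(String[] args) {
        Garage garage = new Garage();
        System.out.println(garage.versionGood(new Car())); // 2.1
    }
}
```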

JavaDevGuy goes further to say that most Getter methods violate this law. This interpretation seems restrictive, as it makes it much more difficult to just get work done (of course, I’m not the most experienced in the design aspect of software engineering so I could be wrong). It seems more practical to use the law to get rid of chaining in your code, as it causes needless complexity. Chaining methods together, regardless of how necessary it is, always ends up looking kind of ugly. I do feel like it is a necessary evil sometimes though.

As it stands, I can understand that this sort of practice can minimize the amount of complexity and reduce code repetition, but it does feel like sometimes you sort of need to put things together in this way to get the desired output. The aforementioned blog post seems to explain when code is violating the law, but unless my eyes and brain are too tired to read properly, the author doesn’t really give any good replacement options for the code. The few alternatives given don’t seem very desirable. This is typically the problem with negative rules: they impose a restriction without a solution, and so you have to scramble to figure out how to work around it.

Perhaps I’ll understand better when we cover this material in class.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

learning concurrency

I’ve heard these terms a lot, concurrency and multithreading, but I never really bothered looking into what they actually do. All I’ve really known about these terms was that they make things run faster. I mostly associated multithreading with hyperthreading; I’ve known that CPUs can use multiple cores for the same application to speed up runtime and make games perform better, but I was always sort of confused about why some games have issues actually taking advantage of modern high-end CPUs. Usually, this is fixed by some sort of modification, if one has been written already. That being said, my association between the two is only really surface level.

Hyperthreading is really just related to the physical hardware, which makes it different from multithreading in programming. Instead, multithreading is a form of concurrency in which a program has multiple threads that can run simultaneously, with each thread having its own operations and processes that it executes. What this ultimately means is that within a program, multiple operations can be executed at the same time.

This really fascinates me coming from the perspective of only writing straightforward code. While I sort of knew intuitively that programs can probably do multiple tasks at the same time, I’ve only experienced this on the end-user side of things, rather than the individual writing the program. After looking into how threads work in Java on a BairesDev post regarding Java concurrency, I can really see how powerful of a tool concurrency can be for runtime efficiency. This post essentially goes over what concurrency is and the ‘basics’ of utilizing built-in multithreading functions in Java, along with the advantages and disadvantages that this design comes with.
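
Here’s a bare-bones sketch, based on my current understanding, of what multithreading looks like in plain Java: two threads run the same task at the same time, so their output can interleave.

```java
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each Runnable is a unit of work; each Thread runs one concurrently.
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " step " + i);
            }
        };

        Thread first = new Thread(task, "worker-1");
        Thread second = new Thread(task, "worker-2");

        first.start();   // both threads run at the same time,
        second.start();  // so their output can interleave

        first.join();    // wait for both to finish before exiting
        second.join();
    }
}
```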

Of course, it does seem like it takes a careful approach to make sure that the implementation of such a tool is actually beneficial to the project. Even with this relatively simple tutorial, I did find myself a little confused at some points, particularly at the point where Locks are introduced. Though this could be the effect of the hour at which I decided to start writing this blog post, it still stands to reason that the complexity of writing a multithreaded program may result in a more difficult development process, especially in debugging.
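
Since the Locks part is what tripped me up, here’s my own toy example (not the tutorial’s) of why you’d want one: two threads bump the same counter, and the ReentrantLock makes sure their increments don’t get lost.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private static int counter = 0;
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock(); // only one thread may hold the lock at a time
                try {
                    counter++; // without the lock, increments from both threads can be lost
                } finally {
                    lock.unlock(); // always release, even if an exception is thrown
                }
            }
        };

        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();

        System.out.println(counter); // reliably 20000 with the lock in place
    }
}
```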

Regardless, I was truly fascinated by the subject matter and I’m really excited to be going over concurrency in our course (CS-343). This seems like a tool I would really like to use as I enjoy toying with logistics in video games and the like.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

code review, what it is and why it matters

For my first blog post for CS-348 and in general (in terms of actual content), I wanted to look into code review. I already had an inkling as to what it could entail, but I wanted to know what sorts of techniques and tools are used in looking over our peers’ (and our own) code.

For this blog post, I consulted a post on SmartBear to get a better understanding of it all. The post establishes reasoning for why we need code review: so that we can reduce the excess workload and costs caused by unreviewed code being pushed through. The post also gives us four common approaches to code review in the current day (which is noted to be very much improved from methods found in the past). These approaches are email threads, pair programming, over-the-shoulder code review, and tool-assisted reviews.

An email thread provides advantages in versatility but sacrifices the ease of communicating that you get in person. Pair programming is the practice of working on the same code at the same time, which is great for mentoring and reviewing at the same time as coding, but doesn’t give the same objectivity as other methods. Of course, over-the-shoulder reviews are simply having a colleague look over your code at your desk, which while fruitful, doesn’t provide as much documentation as other methods. Lastly, tool-assisted reviews are also straightforward, utilizing software to assist with code review.

The SmartBear post goes on to say that tracking code reviews and gathering metrics helps improve the process overall, and should not be skimped out on. Some empirical facts from Cisco’s code review process in 2005 are given as well. According to an analysis of it, code reviews should cover 200 lines of code or less, and reviews should take an hour or less for optimal results. Some other points are given as well if you visit the post.

Considering most of my ‘career’ has been independent coding (that is, coding as the sole contributor), this was rather interesting to me. I’ve done code reviews for my peers, helping them with assignments and the like, while I’ve only really utilized tools and software to assist myself. It’s interesting to see how something as simple as looking over someone’s code on their computer is such an important step in the software development process, but it certainly makes sense. I also wonder how much the code review process has changed since the popularization of AI companions such as ChatGPT and GitHub’s Copilot. Perhaps these tools have made code review with our peers less important, but I wonder if it’s more important to have our peers second-guess the AI’s suggestions in case of mistakes. Nonetheless, having a solid grounding in the actual ramifications of code review will prove very useful during my programming career, I am sure.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.