Category Archives: Programming

Code Review

Source: https://about.gitlab.com/topics/version-control/what-is-code-review/

This article is titled “What is a code review?” As the title clearly states, the article explains the process of code review. “A code review is a peer review of code that helps developers ensure or improve the code quality before they merge and ship it.” Code reviews help in the identification of bugs, increase the overall quality of code, and enhance understanding of the source code. Code review, as the name suggests, happens after a software developer has finished coding. Code needs to be checked for bugs or conflicts before it is merged into an upstream branch. A code reviewer “can be from any team or group as long as they’re a domain expert. If the lines of code cover more than one domain, two experts should review the code.” Adhering to a solid code review process allows for continuous improvement of code and aims to ensure that faulty code isn’t shipped for customers/users to see and use. This process isn’t just important for the code itself, but also for all of the team members of a software development project. While code is being reviewed, meaningful knowledge of the source code is shared between team members to ensure that it is being implemented properly. The main benefits of the code review process are: the sharing of knowledge, discovering bugs earlier, maintaining compliance, enhancing security, increasing collaboration, and improving code quality. Code reviews allow for maintaining compliance because different developers have different backgrounds and thus different personal processes when they are developing. Code reviews allow these people to get together and maintain a standard coding style. Security is enhanced because “security team members can review code for vulnerabilities and alert developers to the threat. Code reviews are a great complement to automated scans and tests that detect security vulnerabilities.” There are many benefits to code review, but there are some disadvantages, including: longer time to ship, focus being pulled from other tasks, and large reviews meaning longer review times. These can be described as necessary evils given the sheer number of positives that code reviews offer in software development.

I chose this article because it was published by GitLab, a platform we are using heavily in class for version control, and I thought it would be interesting to read about this specific topic from the syllabus. Version control platforms such as GitLab are what make code reviews possible, so diving deeper into the topic in an article published by this popular company was tempting. Before reading this article I understood that code reviews were important for pinpointing any bugs or difficulties before merging code into the upstream, but I never really thought about the implications for security or different development styles. I’ll definitely keep this information in mind during future code reviews on the job to remind myself that bugs aren’t the only important thing during a code review.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Why Git

Why is it always Git

Photo by RealToughCandy.com on Pexels.com

Git is the version control system that every programmer uses. Even in my computer science class, we had lectures dedicated to Git, the commands in Git, how Git is used, what Git is used for, and just so much Git. The funny thing is, there are other version control systems, such as Mercurial, but they are hardly ever brought up; they are there but feel overshadowed by Git. So the question I am asking now is: why Git? So I did some investigating.

The question: what does Git do that is so special compared to other version control systems? Version control systems can do all sorts of things, such as allowing developers to see what has been changed, enabling collaborative work, and branching and merging changes to a repo. If multiple systems can do this, then what does Git do differently? An article from GeeksforGeeks lists several things. Git can be worked on offline and is resilient because multiple developers can have copies of the repo, and any local repo can be used to restore a project. It also comes with conflict resolution, providing tools to handle merge conflicts when they arise. So what about the other systems? Well, GFG has that covered. Here are some comparisons.

Subversion

Compared to Git, the architecture is centralized: one single central repo

Fewer branching and merging options

Better performance

Mercurial

Smaller community compared to Git

Not as much flexibility as Git

Perforce

Can handle very large code base

Not as flexible as Git in terms of merging

Git is Open Source and Free, while Perforce isn’t

That is a decent number of reasons to use Git over other version control systems. I think the community part is important for such a popular system, because if you aren’t too familiar with the commands that come with Git, then you have a lot of people who can help. There are a lot of forums and articles about Git tools out there if you ever need them.

I also feel that the collaborative aspect of Git is very helpful. A lot of projects have many people working on them, so having something like Git that can handle that and make the task easier is great. The fact that it is accessible helps with that too.

Git being so popular makes a lot of sense now: accessibility, community, and collaboration are what a lot of developers require, and I have to say Git provides them well.

GeeksforGeeks. (2024, September 19). Git vs. other version control systems: Why Git stands out? https://www.geeksforgeeks.org/git-vs-other-version-control-systems-why-git-stands-out/

From the blog Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

AI Is Not A Software Engineer

In this blog, the author discusses how much the times have changed for new CS graduates, reminiscing about how little they knew and how easily they got a job, then noting how much more prerequisite knowledge is needed now to even sniff a job. The topic of the article is how it is now easier than ever to get code that works. Thanks to AI, code is more plentiful than it has ever been. However, not all code is good code. This leads the author to discuss how, despite how much code there is these days, people capable of understanding and building software are still very necessary.

Although AI can now code for us, the coding wasn’t the hard part in the first place. The hard part was building software, and making good software. It’s easy to throw a bunch of code snippets together that accomplish something. But it is something entirely different to build specialized software that fills certain functions and meets certain criteria. AI cannot replace people, even though it may take away some jobs. At its heart, AI cannot build unique software. Teams of capable developers are still needed. The nature of how people code is changing. It’s becoming more important to be able to harness AI, but still oversee and build functional software.

I chose this article because I think it relates to team building. Like the article said, you need people who can understand code, not just write it. Writing code is easier than ever, but finding people who understand how to build software is harder than ever. When using these tools it’s important not to rely on them too much. Discerning who can actually code is probably one of the most important skills for employers these days. I think it’s important for me and everyone to keep in mind that AI is a tool. Tools don’t make up for a lack of knowledge. Tools are used best by people who know how to use them and maximize their use. One tool can’t solve every single problem. At the end of the day, knowledge is the most important part of being a software developer.

Citations

Charity Majors, “Generative AI is not going to build your engineering team for you,” Stack Overflow Blog. https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

Trying to use Rest API

In this blog post, I’ll share my thoughts on an article I read titled “What is a REST API?” from Cleo’s blog. This article dives into the concept of REST APIs (Representational State Transfer), and after reading it, I feel like I now have a much clearer understanding of how REST APIs work and why they’re so important in modern web development. This topic ties directly into our web development course, where we’re learning about web services and how to connect different systems.

The article explains what REST APIs are and why they are widely used. It starts by explaining the core principles of REST, such as statelessness and resource-based URIs (Uniform Resource Identifiers). In simple terms, REST APIs allow different software systems to communicate over the internet by sending requests (like GET, POST, PUT, DELETE) to a server, where each request is independent and contains all the necessary information to be processed. The article also discusses the scalability and flexibility of REST APIs, which make them a popular choice for building web applications that need to handle a large number of users or integrate with other services.

I chose this article because I’ve heard the term “REST API” thrown around in class and in tech articles, but I never fully understood how they work. As a computer science beginner, I often find myself struggling to grasp concepts like APIs and how they fit into the bigger picture of web development. Since we’re covering APIs and web services in our course, I figured reading a simple, clear article would help me solidify my understanding of this important topic.

After reading the article, I feel much more confident about my understanding of REST APIs. Before, I knew APIs were used to transfer data between different applications, but I didn’t fully understand how REST APIs specifically work. The article’s explanation of statelessness was particularly eye-opening to me. I had no idea that each request in a REST API is self-contained, meaning it doesn’t rely on any prior interactions to be processed. This makes sense when you think about how web applications need to be scalable and efficient—keeping things stateless helps ensure the server isn’t overloaded with unnecessary data.

Another thing I found interesting was the explanation of how RESTful APIs use HTTP methods (like GET and POST) to interact with resources. It made me realize how intuitive and flexible REST is for creating services that can easily be integrated with other software systems. I now feel much more comfortable working with APIs.
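To make that request/response cycle concrete for myself, here is a minimal sketch of those HTTP methods in Java using the standard java.net.http.HttpClient. The https://api.example.com endpoints and the JSON body are hypothetical, invented purely for illustration; they are not from the article.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: read a resource; the request is self-contained (stateless)
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42"))
                .GET()
                .build();
        HttpResponse<String> getResponse = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(getResponse.statusCode() + " " + getResponse.body());

        // POST: create a new resource by sending a JSON body
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"Ada\"}"))
                .build();
        HttpResponse<String> postResponse = client.send(post, HttpResponse.BodyHandlers.ofString());
        System.out.println(postResponse.statusCode());
    }
}
```

Each request above carries everything the server needs to process it, which is exactly the statelessness the article describes.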

I want to explore more advanced topics, like authentication and error handling, which the article briefly touched on. This will help me build more secure and reliable web applications.

Resource:

https://www.cleo.com/blog/blog-knowledge-base-what-is-rest-api

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Introduction to Pattern Designing

Source: https://www.geeksforgeeks.org/introduction-to-pattern-designing/

This article is titled “Introduction to Pattern Designing.” In regards to software development, “pattern designing refers to the application of design patterns, which are reusable and proven solutions to common problems encountered during the design and implementation of software systems.” These reusable design patterns showcase relationships that occur between classes or objects. They are language independent, so they can be described as ideas that make code flexible and overall speed up the process of development. Their purpose is to solve common problems. There are three main kinds of design patterns: creational, structural, and behavioral. “Creational design patterns abstract the instantiation process.” Creational design patterns offer a sense of flexibility in regards to “what gets created, who creates it, how it gets created, and, when.” Knowledge about which concrete class is being used is encapsulated, and the way instances of classes are created is hidden. “Structural design patterns are concerned with how classes and objects are composed to form larger structures.” Inheritance is used to compose interfaces/implementations. Structural design patterns are good for when you want to make independent class libraries collaborate effectively with one another, and they offer flexibility regarding object composition. “Behavioral design patterns are concerned with algorithms and the assignment of responsibilities between objects.” Patterns of communication between objects are being described here. Inheritance is used to distribute behavior between classes, object composition is used for behavioral object patterns, and object patterns encapsulate behavior in objects. Overall, the benefits of pattern designing are reusable solutions, scalability, and abstraction/communication. The downside, however, is that there is a learning curve while you try to understand the patterns, there may be uncertainty about when you should apply the patterns in your code, and if patterns aren’t implemented consistently and in step with the evolution of the system, maintenance issues may occur. But regardless, they are a great way to solve common problems during the development process.
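As a concrete illustration of the creational category, here is a minimal sketch of a simple factory in Java. The Shape, Circle, Square, and ShapeFactory names are hypothetical examples I made up, not ones from the article; the point is only that callers never see which concrete class gets instantiated.

```java
// A minimal factory sketch: the caller asks for a product by name and the
// factory encapsulates which concrete class actually gets instantiated.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeFactory {
    // Callers never reference Circle or Square directly, so the set of
    // concrete shapes can change without touching calling code.
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("Unknown shape: " + kind);
        }
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Shape s = ShapeFactory.create("circle", 2.0);
        System.out.println(s.area()); // ~12.57
    }
}
```

Hiding the instantiation behind the factory is what the article means by encapsulating knowledge of which concrete class is being used.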

I chose this topic because the idea of design patterns was on the syllabus and it interested me. We learned about design patterns such as Factory, Strategy, and Singleton, but reading about the larger categories of creational, structural, and behavioral patterns offered deeper insight into the topic. The benefits of common methodologies in software development are always presented, but it is also good to know about the downsides, which I am glad this article covered for design patterns. When I am working on a team or in the workforce, I will definitely reference these design patterns to improve the maintainability and scalability of my code, and do so in a way that avoids the pitfalls of implementing them incorrectly.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Smelly and Debt

I recently read an article on Opsera titled What Is Code Smell? that explores the concept of code smells and how they relate to technical debt. The article explains that code smells are indicators of deeper issues in software design, like redundant code, overly complex functions, or lack of proper documentation. While these smells don’t necessarily cause bugs, they can make the code harder to maintain or extend in the future. Technical debt, on the other hand, refers to the trade-off between short-term efficiency and long-term code quality. It’s like borrowing from the future to meet deadlines now, but it eventually has to be repaid with interest—usually in the form of extra work to fix the issues caused by the shortcuts taken.

I chose this resource because it gives a practical explanation of two topics that I’ve encountered in my software engineering classes: design smells and technical debt. These are concepts that seemed theoretical at first, but this article helped me understand how they show up in real-world projects. As I start working on my own coding assignments, I can see how these issues might impact my projects if I don’t pay attention to them early on.

The article made me realize just how crucial it is to identify and address code smells early in the development process. For example, the article points out that long methods and duplicated code can be a sign of poor design that will slow down future changes. At first, I thought that refactoring or improving code design was something only necessary when a project was nearing completion. But now I understand that addressing these problems early can save a lot of time and effort in the long run.
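Here is a small, hypothetical Java sketch of the duplicated-code smell the article points out, along with one way it might be refactored; the Invoice classes and the rates are invented for illustration.

```java
// Before: the same tax calculation is copy-pasted in two places (a common smell).
class InvoiceSmelly {
    double totalWithMemberDiscount(double subtotal) {
        double discounted = subtotal - (subtotal * 0.10);
        return discounted + (discounted * 0.0625); // add sales tax
    }
    double totalWithSeasonalDiscount(double subtotal) {
        double discounted = subtotal - (subtotal * 0.25);
        return discounted + (discounted * 0.0625); // same tax logic duplicated
    }
}

// After: one place owns each shared calculation, so a tax change touches one line.
class Invoice {
    private static final double TAX_RATE = 0.0625;

    double totalWithMemberDiscount(double subtotal)   { return withTax(applyDiscount(subtotal, 0.10)); }
    double totalWithSeasonalDiscount(double subtotal) { return withTax(applyDiscount(subtotal, 0.25)); }

    private double applyDiscount(double subtotal, double rate) { return subtotal - (subtotal * rate); }
    private double withTax(double amount) { return amount + (amount * TAX_RATE); }
}
```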

What really stood out to me was the connection between technical debt and long-term project maintenance. As a student, it’s easy to think that as long as the code works, it’s good enough. But this article emphasized that taking shortcuts to meet deadlines may create technical debt that leads to problems later, such as bugs or a codebase that’s difficult to work with. I’ve already seen this in my own projects—trying to push through a solution quickly, only to realize later that the code is harder to manage than I expected.

In the future, I plan to pay more attention to clean code practices. I’ll aim to refactor code regularly and avoid taking shortcuts that might seem like a quick fix but could lead to bigger problems. This approach will not only improve my coding skills but also make my future projects more maintainable.

Resource:

What Is Code Smell? – Opsera Blog

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Understanding SOLID Principles: A Guide

As a student learning software design, I’ve heard about the SOLID principles in class, but I wanted to dive deeper to understand how to actually use them. I came across a blog post called “SOLID Principles — The Definitive Guide” by Midhun Vincent on Medium, which breaks down each of the five principles in a way that makes sense for someone new to object-oriented design. The guide was really helpful and lined up well with what we’re covering in my course, so I thought it would be a good opportunity to see how these principles could improve my coding now and in the future.

The article explains the SOLID principles, which are five important guidelines for creating object-oriented software that’s easier to understand, maintain, and extend. The first principle, the Single Responsibility Principle (SRP), says that each class should do only one thing, making it easier to maintain and modify. The Open/Closed Principle (OCP) suggests that classes should be open for extension but closed for modification, meaning you can add features without changing the original code. The Liskov Substitution Principle (LSP) ensures that subclasses can replace their parent class without breaking the system. The Interface Segregation Principle (ISP) advises creating small, specific interfaces rather than large, general ones. Finally, the Dependency Inversion Principle (DIP) suggests that high-level modules should depend on abstractions, not low-level modules, which makes the code more flexible. These principles help make code cleaner, more modular, and easier to adapt over time.
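As a rough illustration of the Open/Closed and Dependency Inversion ideas, here is a minimal Java sketch; the ReportExporter interface and the exporter classes are hypothetical names I chose, not examples from the post.

```java
// Open/Closed sketch: ReportPrinter depends on an abstraction, so new export
// formats are added by writing new classes, not by editing ReportPrinter.
interface ReportExporter {
    String export(String reportBody);
}

class PlainTextExporter implements ReportExporter {
    public String export(String reportBody) { return reportBody; }
}

class HtmlExporter implements ReportExporter {
    public String export(String reportBody) { return "<html><body>" + reportBody + "</body></html>"; }
}

class ReportPrinter {
    // Closed for modification: adding a MarkdownExporter later requires no
    // change here. Depending on the interface also illustrates DIP.
    void print(String reportBody, ReportExporter exporter) {
        System.out.println(exporter.export(reportBody));
    }
}

public class SolidDemo {
    public static void main(String[] args) {
        ReportPrinter printer = new ReportPrinter();
        printer.print("Quarterly totals", new PlainTextExporter());
        printer.print("Quarterly totals", new HtmlExporter());
    }
}
```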

I picked this article because, while the SOLID principles are useful, they can seem pretty abstract at first. The post explains them in a way that feels practical, with examples that make it easier to apply the principles to real-world coding problems. Plus, the examples connected well with the projects I’ve worked on in my course, especially when it comes to organizing code and making it easier to debug. Seeing how these principles prevent code from becoming too messy gave me a new way of thinking about my own assignments.

My Takeaways and Reflection

Before reading this post, I knew the basic ideas behind SOLID, but I wasn’t sure how to apply them in my own code. Now, I get why each principle is important and how they can save time by reducing debugging and refactoring. For example, the Single Responsibility Principle made me realize that I often give classes too many responsibilities, which complicates fixing bugs. By applying SRP, I can keep things simpler and reduce errors.

Looking ahead, I plan to use these principles in my projects, especially the Open/Closed Principle and Interface Segregation Principle. I can see how they’ll help me write code that’s easier to update and adapt. Understanding SOLID will definitely give me a strong foundation as I take on more complex projects in the future.

Resource:

“SOLID Principles — The Definitive Guide” by Midhun Vincent, on Medium.

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Version Control

Source: https://www.spiceworks.com/tech/devops/articles/what-is-version-control/

This article is titled “What Is Version Control? Meaning, Tools, and Advantages.” The main purpose of version control is to “track the progress of code across development and iterations and also aids in managing changes during the life cycle.” Records are kept of all changes with names, timestamps, and other important information. So, the process by which software code is monitored and the way in which changes are made is called version control. A huge benefit of version control is being able to look at the revision history and determine where problems originated from, and who caused them to happen. This allows for increased efficiency regarding workflow, considering that the time required to locate problems is greatly reduced. Another benefit of version control is branching. “Branching is a distinct approach to version control where development programs are duplicated for parallel versions of development while keeping the original and working on the branch or making separate modifications to each.” This allows for enhanced collaboration where development is accelerated, issues are resolved, and code remains organized. A couple of very popular version control tools are Git and GitHub. The creator of Linux, Linus Torvalds, created Git. Git’s memory footprint isn’t vast, and it is able to follow changes in any file. It is a very simple version control system and as a result is used by top companies such as Google. GitHub is a service that enables development teams to collaborate and keep track of all their code changes in a cloud environment. GitHub is secure and reliable, and as a result is also widely used. Through the use of a version control system the following can be achieved: “streamline merging and branching, examination/experimentation with code, the ability to operate offline, creation of regular/automated backups, communication through open channels.” Overall, version control aids in the maintenance of reliable code bases and enforces accountability for effective collaborative development.

I selected this article because we are actively learning about version control in class right now so I figured it’d be the perfect time to read up on it more. Reading the GitKit chapters has exposed me to different git commands and GitHub usage. It was interesting to read in this article about all of the in-depth benefits that version control offers and clearly showcases why even top companies such as Google use it to optimize their workflow. In future practice, whether it be at a job or while working on an individual project, I will use version control to improve collaboration and the ease of maintenance of my code.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Anti-Patterns

Source: https://www.freecodecamp.org/news/antipatterns-to-avoid-in-code/

This article is titled “Anti-patterns You Should Avoid in Your Code.” It specifically mentions six of them: Spaghetti Code, Golden Hammer, Boat Anchor, Dead Code, Proliferation of Code, and the God Object. An anti-pattern, in the context of software development, is an example of how not to solve a problem in a codebase. They are not a positive thing; they are examples of practices to avoid in the development process. Anti-patterns lead to technical debt, code that you eventually have to come back to and properly fix later. Spaghetti Code is the most common: it is code that doesn’t have much structure. It is called Spaghetti Code because everything is difficult to follow, files are located in random places, and when visualized in a diagram, it appears to be a jumbled mess, much like spaghetti. Golden Hammer refers to a scenario where you follow a certain process that doesn’t necessarily align perfectly with the project but still works well enough. This may not seem like a large issue, but it is obviously not the best practice to follow because it’ll cause performance issues in the long run. You should always use a process that is the best fit for your project, even if you need to teach yourself or learn something new. Boat Anchor is when developers leave code in the codebase that isn’t actively being used, in the hope that it will be needed later and thus won’t require much effort to implement when it eventually is. The main problem with this comes when maintaining the code: it raises the question of which code in the codebase is unused and which is actively being used, and trying to fix a bug in code that isn’t even being used is a waste of time. Dead Code is code that doesn’t look like it’s really doing anything, yet it is being called from many different places. This leads to problems when trying to modify the code because no one is sure what is actually dead. Proliferation of Code is about objects whose only purpose is to invoke a more important object, meaning they don’t really do anything on their own; the action of invoking the more important object should be given to the calling object instead. Lastly, the God Object is an example of an object that does too much. Objects should only be responsible for doing one thing, echoing the Single Responsibility Principle in SOLID.
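To picture the God Object, here is a hypothetical Java sketch of a class that does too much and one way it could be split along single responsibilities; all of the class names are invented for illustration and are not from the article.

```java
// God Object sketch: one class that parses input, applies business rules,
// talks to storage, and formats output -- it simply does too much.
class OrderManagerGodObject {
    void parseOrderCsv(String csv) { /* parsing logic */ }
    double calculateTotal(double subtotal, double taxRate) { return subtotal * (1 + taxRate); }
    void saveToDatabase(String order) { /* persistence logic */ }
    String renderReceiptHtml(String order) { return "<html>" + order + "</html>"; }
}

// A split along single responsibilities: each class now has one reason to change.
class OrderParser     { void parse(String csv) { /* parsing logic */ } }
class PriceCalculator { double total(double subtotal, double taxRate) { return subtotal * (1 + taxRate); } }
class OrderRepository { void save(String order) { /* persistence logic */ } }
class ReceiptRenderer { String render(String order) { return "<html>" + order + "</html>"; } }
```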

I chose this particular source because I appreciated the way clear examples were given for each of the six anti-patterns, and upon reviewing the syllabus the topic “anti-patterns” seemed interesting. When you’re learning computer science, a lot of the time you’re learning about things that you should do and not about things that you shouldn’t do. I really enjoyed reading about these six examples of common mistakes that developers make in industry. It’s important to recognize both good and bad practices to ensure that your projects are properly optimized. I can definitely see myself referencing anti-patterns when designing code in the future so my code can easily be maintained.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Was it really all about frameworks?

“What’s a Framework? All About Software Frameworks” is an article written by Luke Stahl. I found it on dev.to, but it can also be found on the Contentful Blog. Links to both blogs are available in this post. The reason I chose this article is that I’ve heard a lot about frameworks since I started exploring web application development, and they always confused me. I used to struggle to understand what they were and what they were built for. Luke Stahl’s article answered both of these questions for me, though it also left me with a few more.

The article provides a general overview of frameworks for software development, web development, and working with APIs. However, it then pivots toward web development frameworks, which left me wondering about the differences between them. While it clearly explains the distinctions between backend and frontend frameworks, it misses covering some other differences I initially expected.

The author also includes a fairly extensive list of the benefits that frameworks bring to development, along with a brief list of challenges. For me, it’s always a bit concerning when something presents itself with so many benefits and so few downsides. Still, the benefits outlined are appealing and useful for developers.

Frameworks, as explained by Luke Stahl, are essentially blueprints or templates for a particular final product. They provide the essential building blocks and materials required to create your software or web application. A framework offers a skeleton for your application, allowing you to build functionality on top of it.

Halfway through reading the article, I began to wonder about the difference between a framework and a code library. Thankfully, and to my surprise, it seems Luke anticipated this question. He includes an explanation from two outside sources, David Fateh and Alvin Bryan. Both summarize that frameworks act as templates, while code libraries serve as tools you can use to build on top of that template.
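My own rough sketch of that distinction in Java, with made-up names: the library is something your code calls when you need it, while the framework is a skeleton that calls your code at the hooks it exposes.

```java
// Library sketch: your code is in charge and calls the library when it needs it.
class JsonLibrary {
    static String toJson(String key, String value) { return "{\"" + key + "\": \"" + value + "\"}"; }
}

// Framework sketch: the framework owns the skeleton and calls *your* code
// at the points it exposes (here, a single handler hook).
interface RequestHandler {
    String handle(String request);
}

class TinyFramework {
    // The "template": setup, dispatch, and teardown live here; you only fill in the hook.
    void run(RequestHandler handler) {
        String request = "GET /hello";              // framework receives the request
        String response = handler.handle(request);  // framework calls back into your code
        System.out.println(response);               // framework sends the response
    }
}

public class FrameworkDemo {
    public static void main(String[] args) {
        new TinyFramework().run(req -> JsonLibrary.toJson("echo", req));
    }
}
```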

One point that caught my attention—and that I believe is very important—is that most frameworks are FOSS (Free and Open Source Software). Being free and open source brings many advantages. Such products undergo extensive testing by a diverse group of programmers, across various applications, which increases the test sample size. This added testing leads to greater reliability, as it helps ensure that any new functionality works as intended. Another benefit of free and open-source products is the large community that often forms around them. This means that if you encounter questions or difficulties, it’s likely that someone else has already addressed them.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.