Category Archives: CS@Worcester

A quick look at front-end

Hello! For this quarter’s blog I read a post written by Jeff Bridgforth, titled “Think like a front-end developer.” Coming to the end of the semester we have started working with the frontend, and I got the impression that it would be useful to see, as with my other blog posts, the insights of someone who actually has experience working on it. As such, I wanted to find a blog that could give me an idea of the practical priorities and decision patterns used in real projects, and this post does that well.

To quickly summarize, a front-end developer is someone who designs what a user or client would actually see when they interact with a program. Front-end work encompasses everything from the UI to how it interacts with the backend, or what goes on behind the scenes. Jeff outlines the basic mindset he believes front-end developers should have. He explains that the three main languages used for the front end (HTML, CSS, and JavaScript) should be “partitioned” into specific roles: HTML for structure, CSS for styling, and JavaScript for behavior, and that keeping these separate makes everything easier to understand and maintain. He also explains that clear, semantic HTML should come first, with CSS layered on top of that before any JavaScript is added. Finally, he talks about the importance of being involved early in the design process, keeping things simple, and using small, practical tools for tasks like testing and image optimization.

Unlike some other topics we have gone through, web dev is something I had at least done a few times before starting this class, so I would say I am comfortable with the “design” side of front-end work. That being said, when it comes to having it actually do things beyond letting people navigate from page to page (in other words, interfacing with the backend through an API), I was completely inexperienced. Our class has really helped me get used to all of that, but since we are in a classroom setting, like everything else we have learned there is a lack of practical insight into the material. Over this semester I have realized the value of looking online and finding these blog posts: the authors’ first-hand experience tells them what to prioritize, which they end up writing about and thus passing on to readers such as myself. Very useful. Anyway, I was a bit unsure exactly how the various languages would interact with each other, but if this blog is anything to go off of, it seems smart to keep them separate, which makes sense.

From the blog Joshua's Blog by Joshua D. and used with permission of the author. All other rights reserved by the author.

Correctly applying the Open-Closed Principle

Original Blog: https://blog.logrocket.com/solid-open-closed-principle/

For this post, I’ve decided to write about the application of the Open-Closed Principle in programming, and how new features can be added to an existing codebase without altering the code that is already there. I’ll also be writing about the pitfalls of using the Open-Closed Principle in circumstances where it isn’t necessary or is excessive.

The Open-Closed Principle is one of the SOLID principles, an acronym whose tenets are:

S – Single Responsibility

O – Open-Closed Principle

L – Liskov Substitution Principle

I – Interface Segregation Principle

D – Dependency Inversion Principle

In the blog “SOLID series: The Open-Closed Principle,” the author defines the Open-Closed Principle as dictating that a program be open to the addition of modules, classes, and functions without changes being made to existing code. This can be done, among other methods, through the use of interfaces, which outline which methods a class must implement without modifying the methods or the class itself, and abstract classes, which provide blueprints for future subclasses. By designing your program around shared methods within classes and inheritance, you ensure that bugs involving the code within those methods are few and far between. However, criticisms of OCP arise when the abstraction that results from repeated inheritance becomes cumbersome.
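To make the basic idea concrete before getting into those criticisms, here is a minimal Java sketch of my own (the discount example and names are hypothetical, not from the blog): new behavior is added by writing a new implementation of an interface, while the class that uses the interface stays closed for modification.

    // Hypothetical example: pricing behavior defined behind an interface.
    interface DiscountPolicy {
        double apply(double price);
    }

    class NoDiscount implements DiscountPolicy {
        public double apply(double price) {
            return price;
        }
    }

    // A new feature arrives as a new class; no existing code is edited.
    class HolidayDiscount implements DiscountPolicy {
        public double apply(double price) {
            return price * 0.9; // 10% off
        }
    }

    class Checkout {
        private final DiscountPolicy policy;

        Checkout(DiscountPolicy policy) {
            this.policy = policy;
        }

        // Checkout is closed for modification: it never changes when a new
        // DiscountPolicy implementation is introduced.
        double total(double price) {
            return policy.apply(price);
        }
    }

    public class OcpDemo {
        public static void main(String[] args) {
            Checkout sale = new Checkout(new HolidayDiscount());
            System.out.println(sale.total(20.0)); // 18.0
        }
    }

If another sale type were needed later, only a new DiscountPolicy implementation would be added; Checkout and the existing policies stay untouched.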

In the blog, the author states that many developers feel that the definition of the Open-Closed Principle itself implies inheritance, leading to some specific examples of improperly used OCP. The first one the author mentions is “over-engineered abstractions.” This occurs when the amount of abstraction in a program is unnecessary, which can make the program more complex than it needs to be. That added complexity makes the codebase increasingly hard for contributors to understand, leading to possible bugs later in the program’s development. Another problem outlined by the author is the “interface explosion” problem. This happens when interfaces are overused in a codebase. The author mentions how the .NET ecosystem suffers from this due to its reliance on dependency injection. When this is a problem, the codebase can become cluttered and dense.

In summary, the author explained the definition of the Open-Closed Principle, then gave criticisms of the principle based on the environment in which it would be implemented, with inheritance and abstraction sometimes resulting in increased complexity and codebase clutter when applied in environments that don’t necessarily need them. A thought I had about the material covered in the blog is how the use of the factory design pattern could help in cases of an “interface explosion,” since it would reduce the dependencies required by the client for the code to function, and would reduce the number of locations that would need to be edited to add a new object of a certain type.
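Here is a rough Java sketch of that factory idea (my own illustration, not code from the blog, and the notifier names are made up): the client asks one factory for an object instead of depending on every concrete type directly, so adding a new type only touches the factory.

    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) {
            System.out.println("email: " + message);
        }
    }

    class SmsNotifier implements Notifier {
        public void send(String message) {
            System.out.println("sms: " + message);
        }
    }

    class NotifierFactory {
        // Adding a new notifier means adding one case here, rather than
        // threading another dependency through every client.
        static Notifier create(String kind) {
            switch (kind) {
                case "email": return new EmailNotifier();
                case "sms":   return new SmsNotifier();
                default: throw new IllegalArgumentException("unknown kind: " + kind);
            }
        }
    }

    public class FactoryDemo {
        public static void main(String[] args) {
            Notifier notifier = NotifierFactory.create("email");
            notifier.send("build finished");
        }
    }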

From the blog My first blog by Michael and used with permission of the author. All other rights reserved by the author.

Teamwork and Project Management

https://www.geeksforgeeks.org/software-engineering/software-engineering-software-project-management-spm/

When learning about the entire software building process and the Agile framework, we learned about a better and more efficient way of developing a project as a team. The overarching theme for having a well-maintained project that stays on course is having a good software project management system in place. It is important because software is intangible, making it difficult to visualize progress or quality without strict oversight. This article from GeeksforGeeks discusses Software Project Management, a discipline within software engineering focused on planning, implementing, monitoring, and controlling software projects. The goal is to deliver high-quality software on time and within budget by effectively managing resources, risks, and changes.

The practice encompasses several critical aspects, starting with detailed planning to outline scope and resources, followed by the active leading of diverse technical teams. Managers must oversee execution through rigorous time and budget management while also handling maintenance to address defects early. To achieve this, project management employs specialized strategies, including risk management to minimize threats, conflict management to resolve team disputes, and change management to handle shifting goals. It also involves technical controls like configuration management to track code versions and release management to oversee deployments. The article also touches on some drawbacks: the process can add complexity and significant communication overhead, especially with large teams.

I think it’s important to understand the different aspects of project management and what goes into creating a project as a team. Working as a team is critical in software engineering because modern projects are often too complex and massive for any single individual to handle efficiently. By dividing tasks, teams can work in parallel, allowing features to be built, tested, and deployed simultaneously, which significantly speeds up the development process. Beyond just speed, teamwork improves code quality through practices like peer reviews and pair programming, where “multiple eyes” on the code help catch errors that a solitary developer might miss. It can be easy as a student to think that getting into this field will mean sitting behind a desk and working on your own part of a project; however, working within a team and adhering to the group’s structure and workflow management can be a shock to people new not just to the software field but to the workforce in general. When working in a large team it can be easy to stray from the goal or specifications without strict planning and oversight. Software project management provides the necessary framework to navigate this, ensuring that unique client requirements are met precisely rather than relying on assumptions.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Understanding Technical Debt: Why It Actually Matters More Than We Think

When I first heard the phrase “technical debt,” I honestly thought it was just a fancy developer way of saying “bad code.” But after reading Atlassian’s article “Technical Debt: What It Is and How to Manage It,” I realized it’s way deeper than that. The article explains technical debt as the cost of choosing a quick solution now instead of a cleaner, long-term one. This can come from rushing to meet deadlines, adding features without proper planning, skipping tests, or even just writing code before fully understanding the problem. What I liked about this resource is that it breaks the topic down in a way that makes sense, showing how debt doesn’t always come from laziness; sometimes it’s just the reality of working in fast-paced software development.

I picked this article because technical debt is something we’ve basically been talking about throughout CS-348, even if we didn’t always call it that. Whether it’s writing maintainable code, designing clean architecture, or keeping up with version control, everything connects back to avoiding unnecessary debt. I’ve heard instructors and even classmates say, “We’ll fix that later,” and this article made me understand the impact behind that mindset. It stood out to me because it not only defined the problem but walked through how teams can recognize debt early and avoid letting it build up until it becomes overwhelming.

Reading this article made me realize how much technical debt affects the entire development process, not just the code. It slows teams down, creates frustration, and makes simple tasks more complicated than they should be. One part that hit me was how the article described debt snowballing over time. It reminded me of school assignments: if you ignore a confusing part early on, it always comes back to make the whole project harder. Another point I loved was the idea of being honest about debt instead of acting like it doesn’t exist. Communication is a big deal in development, and the article made that very clear.

Moving forward, I’m definitely going to be more intentional about how I write and manage code. Instead of rushing through things just to “get it done,” I want to slow down and think about how my decisions today could affect future work, both for me and for anyone else who touches the code. Good documentation, regular refactoring, testing early, and asking questions when something feels off are all habits I want to bring into my future career. Understanding technical debt helped me see software development as a long game, and being aware of these trade-offs will help me build better, cleaner projects in the future.

Source:

https://www.atlassian.com/agile/software-development/technical-debt

From the blog CS@Worcester – Circuit Star | Tech & Business Insights by Queenstar Kyere Gyamfi and used with permission of the author. All other rights reserved by the author.

Development Environments

For this blog, I decided to do some research about development environments. When looking for sources to reference, I came across the article “Comparison of Development Environments” on Coder’s blog. The post works its way from simple development environments up to more complicated ones.

The article starts off by going into depth about what development environments are. The integrated development environment, or IDE, is the center of where developers navigate and edit code. However, the IDE is just one component of the development environment; there are also build tools, package managers, system dependencies, and configurations. The article then compares several development environment architectures:

  • Pure local environments: used by single developers or small teams since everything runs and is stored locally, which keeps costs low.
  • Virtual desktop infrastructure: the development environment lives on a separate remote virtual desktop, so larger teams can work from shared machines and everything is saved in a separate place.
  • Dev containers: the development environment is packaged into a container, which provides a way to precisely specify the environment once so everyone gets the same versions and tools on every launch. I know in class we were able to build our own dev containers to match the Java version and Java compiler version.
  • Cloud development environments: a cloud service dedicates, manages, and monitors the dev environments for the whole team.

This blog really helped me dive into the deeper areas of development environments. Often when it comes to where code is stored, and not just writing it, I get lost and tend to get confused. Seeing the images the article used helped show what does what and where things are held. Also, as with most things, there are pros and cons to each of these environments. For example, pure local environments suit single developers or small teams because everything is stored locally, which keeps costs low, but it makes working in big development teams hard.

I haven’t had much experience with using different environments and have instead mostly focused on the coding itself, but knowing these aspects is crucial when working on projects and software. Knowing to use cloud development environments for big groups, and how to set up dev containers, is very important for a proper workflow and for making sure everyone on the team is on the same page. I hope to build on this understanding going forward into my career.

From the blog CS@Worcester – Works for Me by Seth Boudreau and used with permission of the author. All other rights reserved by the author.

Version Control

https://www.geeksforgeeks.org/git/version-control-systems/

In our class, we spent a lot of time exploring Git and the power of version control systems. A version control system (VCS) is an essential software tool designed to track and manage changes to source code over time. Its primary function is to maintain a detailed history of a project, allowing developers to record every update, collaborate effectively without overwriting each other’s work, and revert to previous versions if necessary. This article from GeeksforGeeks provides a comprehensive overview of VCSs in general, explaining what they are, the different types available, and the most popular ones used today.

The article explains how VCSs come in three different forms: local, centralized, and distributed. Local version control systems operate strictly on a single computer, making them suitable only for individual use, though they carry a high risk of data loss if that machine fails. Centralized version control systems solve the collaboration problem by using a single server to store all files and history; however, this creates a single point of failure where server downtime stops all work. Distributed version control systems address this vulnerability by allowing every developer to mirror the entire repository locally. This means that if the server goes down, any client’s repository can be used to restore it, and most operations, such as committing changes, can be done offline before pushing them to a shared server.

Git itself is a distributed version control system used to track changes in files, especially source code, during software development. Because every developer has a full copy of the repository, they can work offline, make changes, create branches, and experiment without affecting the main project until they are ready to share their updates. Git also provides tools for merging changes from multiple contributors, resolving conflicts, and keeping a clear history of who made each change and why.

Learning Git has been beneficial to me as a new programmer because I can now host, share, and update my code in a structured and maintainable manner. Utilizing online platforms that work with Git helps me contribute work to other projects, as well as letting other people contribute to mine. I remember in previous classes where we had to work on group coding projects, it was difficult to update and maintain our code as a cohesive unit. We would find ourselves emailing snippets of code back and forth in order to implement new changes. With my knowledge of Git and GitLab/GitHub, in future projects I will create project repositories that can be simultaneously updated and changed while keeping track of all edits and fixes. Also, since these online platforms are widely used and accepted in the programming field, I will have a place to host all the personal projects that will build my portfolio for future employers to access. They will be able to see the progress and changes I have made on certain projects, so they can see my improvement as a programmer.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

The Magic Behind Frameworks

https://aws.amazon.com/what-is/framework

When learning about backend and front-end architecture, we got to explore the use of frameworks. A framework is a set of reusable software components, such as libraries, APIs, and tools, that helps developers build applications more efficiently; frameworks are the structural “skeleton” of a project. Instead of writing everything from scratch, you build on top of a framework’s building blocks and standardized patterns, letting you focus on the unique parts of your application rather than reinventing foundational pieces. This semester was my first real exposure to using a framework, and I decided to explore what exactly it was and how to best use one in future projects. To build on my base-level knowledge, I came across this article on the Amazon Web Services site.

The article explains how using a framework has several advantages, such as better code quality. Since the programmer reuses parts that are typically well tested, the code is more efficient and easier to maintain. Also, having less duplication in the code reduces certain code smells and leads to faster development. Having a framework can also aid collaboration, especially when everyone is using the same architectural patterns. I remember thinking of a framework as a library; however, this article explains that the difference is that frameworks define the structure and flow of your application, controlling how and when things happen, whereas libraries are just a set of helper functions you can call when needed.
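The “who calls whom” part is what finally made the difference click for me. Below is a tiny Java sketch of my own (the names are made up, not from the AWS article): with a library my code is in charge and calls a helper, while with a framework the framework owns the flow and calls my code at the points it defines (inversion of control).

    // Library style: my code calls the helper whenever it wants.
    class StringLibrary {
        static String shout(String text) {
            return text.toUpperCase() + "!";
        }
    }

    // Framework style: the framework owns the loop and calls my code
    // through a hook that I plug in.
    interface RequestHandler {
        String handle(String request);
    }

    class TinyFramework {
        private final RequestHandler handler;

        TinyFramework(RequestHandler handler) {
            this.handler = handler;
        }

        void run(String[] requests) {
            for (String request : requests) {
                // The framework decides when and how my handler runs.
                System.out.println(handler.handle(request));
            }
        }
    }

    public class FrameworkVsLibrary {
        public static void main(String[] args) {
            System.out.println(StringLibrary.shout("hello")); // I call the library
            new TinyFramework(req -> "handled " + req)        // the framework calls me
                    .run(new String[] { "/home", "/about" });
        }
    }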

Frameworks are not one-size-fits-all. As an aspiring web developer, I will need to know how to utilize web-development-specific frameworks. For example, Angular, VueJS, and Bootstrap are common frameworks utilized by top tech companies. Frameworks are extremely useful in web development because they give developers a structured, efficient way to build websites and web applications without starting from scratch. A framework provides prewritten components, templates, and patterns for common tasks like routing, handling requests, managing databases, rendering pages, and securing user data. Many web frameworks include built-in security protections, performance optimizations, and ways to organize files so the project scales as it grows. There are, however, benefits to learning more than one framework, especially if it helps with full-stack development. Choosing the right one for your project depends on the requirements. In the future I will need to analyze the scalability, the ecosystem of available libraries, the longevity, and the speed of development for my project.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Using immutability in conjunction with encapsulation

Original Blog: https://blog.ploeh.dk/2024/06/12/simpler-encapsulation-with-immutability/

For this post, I’ve decided to write about encapsulation, and how implementing immutability in a program can make implementing encapsulation scalable and overall easier. I chose this topic because it seems like an interesting look at how concepts learned in a course I’m currently taking would appear in projects designed for a large user base.

Encapsulation is a widely used concept in object-oriented programming, and for good reason. It allows developers to write methods and functions that expose only the necessary amount of information, and it makes the scope of variables clear and well defined, which makes debugging and problem identification much easier than they would be otherwise. A problem with encapsulation, however, is introduced in the blog “Simpler encapsulation with immutability” by Mark Seemann. Seemann identifies a common difficulty encountered when implementing encapsulation on a large scale: how do we ensure that invariants are preserved? Invariants are conditions within the program that remain true regardless of changes made to other areas of the code. They can include class representation invariants, loop invariants, or other assumptions about the logic of your program. When implementing encapsulation at a large scale, it can be difficult to preserve this property everywhere, since mutable objects can take on many different states at many points in the program.

A solution the author offers is to simply make the object immutable, guaranteeing that whatever happens while the program is executing, its value can’t change. The author uses the example of a program that models a priority list to illustrate the difference in how hard invariants are to preserve, with the invariant being that the priorities in the list must sum to 100. Without making the members of the list immutable, the invariant is difficult to maintain manually at all times, while if the objects are immutable, you can guarantee that at no point will they fail to sum to 100.
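As a quick sketch of how that could look in Java (my own illustration, not code from Seemann’s post), an immutable type can check the sum-to-100 invariant once in its constructor; since neither the field nor the list can change afterward, the invariant can never be silently broken.

    import java.util.ArrayList;
    import java.util.List;

    // Immutable priority list: the invariant is enforced at construction and
    // can never be broken afterward, because nothing about the object mutates.
    final class PriorityList {
        private final List<Integer> priorities;

        PriorityList(List<Integer> priorities) {
            int sum = priorities.stream().mapToInt(Integer::intValue).sum();
            if (sum != 100) {
                throw new IllegalArgumentException("priorities must sum to 100, got " + sum);
            }
            this.priorities = List.copyOf(priorities); // unmodifiable defensive copy
        }

        // "Changing" a value produces a new object, which re-runs the check.
        PriorityList withPriority(int index, int newValue) {
            List<Integer> updated = new ArrayList<>(priorities);
            updated.set(index, newValue);
            return new PriorityList(updated);
        }

        List<Integer> priorities() {
            return priorities;
        }
    }

    public class PriorityDemo {
        public static void main(String[] args) {
            PriorityList list = new PriorityList(List.of(60, 30, 10)); // valid: sums to 100
            System.out.println(list.priorities());
            // list.withPriority(0, 70) would throw, because 70 + 30 + 10 != 100
        }
    }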

In summary, the author outlines common problems developers have with implementing encapsulation while preserving invariants at large scales. He then proposes immutability as a way to ensure that, at all times, objects meant to uphold invariants are unchanging. Some thoughts I have about the blog are whether making an object immutable could impose an unwanted limit on the developer, and, if that were the case, whether there is another solution that preserves invariants at a large scale without requiring that the object never change.

From the blog CS@Worcester – My first blog by Michael and used with permission of the author. All other rights reserved by the author.

All About Interfaces

When learning about software design and construction, specifically in Java programming, we explored in more depth the implementation and utilization of interfaces. Before this class I had learned what an interface was, but did not get a good understanding of how powerful it is. I chose this video by the YouTuber “Keep on coding” to help me understand the different ways an interface is useful and how best to utilize them in my code.

Working with interfaces in Java is essential because interfaces embody core principles of good software design. They allow developers to define behaviors abstractly, separating the “what” from the “how,” reducing complexity and improving clarity. In class we went over different design smells in code. Interfaces combat certain smells by making systems more flexible, easier to modify, and more resilient to change. Interfaces also let code depend on stated intentions instead of concrete implementations. They enable substitution and polymorphism, which are central to many design patterns and architectural styles like Clean Architecture. Additionally, interfaces improve testability by allowing the use of mocks and alternative implementations.

The video helped expand on this knowledge by walking the viewer through tutorials that highlight the flexibility of interface implementation. The tutorial covers essential concepts such as defining and implementing interfaces, demonstrating how classes use the implements keyword to adopt an interface and are then required to provide implementations for all its methods. It also illustrates how a single class can implement multiple interfaces. A significant feature discussed is the ability to create interface reference variables, allowing a variable of an interface type to reference objects of any class that implements that interface, which enables polymorphic behavior. Additionally, the video details that variables declared within interfaces are implicitly public, static, and final. It also explains how interfaces can extend other interfaces, inheriting their methods and variables.
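As a quick recap of those features, here is a short Java sketch of my own (not the video’s code): an interface constant, an interface extending another interface, a class implementing multiple interfaces, and an interface reference variable used polymorphically.

    // Interface variables are implicitly public, static, and final.
    interface Playable {
        int MAX_VOLUME = 10;
        void play();
    }

    // An interface can extend another interface, inheriting its members.
    interface Recordable extends Playable {
        void record();
    }

    // A class can implement multiple interfaces and must provide
    // bodies for all of their abstract methods.
    class Phone implements Recordable, Comparable<Phone> {
        private final String model;

        Phone(String model) {
            this.model = model;
        }

        public void play() {
            System.out.println(model + " playing at volume " + MAX_VOLUME);
        }

        public void record() {
            System.out.println(model + " recording");
        }

        public int compareTo(Phone other) {
            return model.compareTo(other.model);
        }
    }

    public class InterfaceDemo {
        public static void main(String[] args) {
            // An interface reference variable can point to any implementing class.
            Playable device = new Phone("Pixel");
            device.play(); // polymorphic call resolved at runtime
        }
    }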

I wanted to learn more about interfaces because I want to be a more proficient Java coder. When you are a beginner, you often write code that is tightly tied to specific classes and implementations. I tend to use concrete classes directly, which makes my code rigid and difficult to change later. As seen in our duck class examples, the less efficient architecture allowed duplication of similar logic across multiple classes instead of abstracting shared behavior. In my experience I have had a tendency to overuse inheritance where an interface would be a better fit.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

REST API Design

One of the topics we have been looking at in class has been REST APIs. Since we have been working with APIs for some time, I wanted to dive deeper into what they are and what makes a good one. The blog I found seemed perfect for this: Everything I know about good API design on SeanGoedecke.com.

One of the major topics the post touched upon was that a good API design should be familiar and flexible. This means that for the developers who make them, APIs are complex products where lots of effort is put into designing and polishing them. However, for the people who use them, the API should be familiar, so that they know how to use it without lots of documentation. Even reading this makes it seem complicated. I guess I haven’t yet broken into this world of API creation. Basically, it seems to me it comes down to building a simple API design while also making it as useful as possible. Keeping it simple is not always the best solution, but making things too complicated will make the API hard for the team to use.

Another key topic the article goes over is changing APIs without breaking userspace. Basically, what this is saying is that once an API is released and public, making any small change could break everything for its users. Instead, the article suggests using versioning. I have worked with versioning in class, looking at the different types of changes based on version numbers. For example, we used versions with three numbers, such as 2.4.0, where the numbers from left to right range from breaking changes down to small patches. Versioning is a useful way of updating APIs while also maintaining backwards compatibility. Now I really take notice of the versions of the applications I use and understand what kind of change has been implemented.
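To spell out that numbering scheme, here is a small Java sketch of my own (assuming the usual semantic-versioning convention of MAJOR.MINOR.PATCH; it is not code from the article):

    // 2.4.0 reads as MAJOR.MINOR.PATCH: the left number signals breaking changes,
    // the middle number backwards-compatible features, the right number patches.
    record Version(int major, int minor, int patch) {

        static Version parse(String text) {
            String[] parts = text.split("\\.");
            return new Version(Integer.parseInt(parts[0]),
                               Integer.parseInt(parts[1]),
                               Integer.parseInt(parts[2]));
        }

        String changeKindSince(Version previous) {
            if (major != previous.major()) return "breaking change";
            if (minor != previous.minor()) return "backwards-compatible feature";
            if (patch != previous.patch()) return "patch / bug fix";
            return "no change";
        }
    }

    public class VersionDemo {
        public static void main(String[] args) {
            Version current = Version.parse("2.4.0");
            Version next = Version.parse("3.0.0");
            // Clients can keep using 2.x safely; 3.0.0 warns them to expect breakage.
            System.out.println(next.changeKindSince(current)); // breaking change
        }
    }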

These aspects really helped me understand what makes a good API design. Building an API is not only about making it functional but also about making it flexible for developers and users. Being able to design APIs in two completely different ways and finding a happy medium is the key to making a good one. So is understanding versioning and how to avoid completely breaking your system when making changes. Going forward I hope I’ll be able to keep these things in mind, keeping things predictable while also making the functionality complex and useful.

From the blog CS@Worcester – Works for Me by Seth Boudreau and used with permission of the author. All other rights reserved by the author.