Category Archives: Quarter-1

The Real Value of Open Source Software

Open source software is clearly valuable to society, but how would that value translate into currency? About 96% of commercial codebases contain code that originates from open source software (OSS), and according to recent articles and papers on the subject, the amount this free software saves companies is far larger than people originally expected. Without it, firms would pay an estimated 3.5 times more to create the software and platforms that run their entire business, around $8.8 trillion in total. Because of this, many companies stress how important it is to hire people who have experience with open source software, since it is the backbone of their systems.

The authors of the article calculated the estimated value of open source software by combining data from the Census II of Free and Open Source Software and BuiltWith, a database that scans millions of websites and identifies the technologies they use. To estimate the cost of recreating these open source projects from scratch, the researchers analyzed the number of lines of code in each project and used the COCOMO II cost estimation model, which predicts development effort and expense based on the size and complexity of the code. They then adjusted these amounts using regional wage data to reflect labor costs around the world. This allowed them to estimate that recreating all of the software once would cost around $4.2 billion, while the total value of the open source software being used across the world, counting every firm that relies on it, is almost $8.8 trillion.

The reason I chose this specific article is our current discussion of open source software and how important it is, not only to people who are in tech, but also to people who are less fortunate and need the resources this free software gives them. Seeing an article that explains the commercial value this software provides around the world, and how much it is actually worth, is shocking and goes along perfectly with the importance we saw in class. The article itself was very interesting to read as well. Thinking about how much it would actually cost to recreate this software is absurd and really opened my eyes to how large it actually is. Before reading, I believed open source was helpful, but I didn't know how important it is inside actual companies, or how helpful it is to have open source experience when applying for jobs. I am definitely going to do more research into different software, and hopefully it helps me with my future career.
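To get a feel for the kind of math behind those replacement-cost figures, here is an illustrative back-of-the-envelope calculation. It uses the classic organic-mode coefficients from the original COCOMO model rather than the exact COCOMO II calibration used in the study, so the specific numbers are only an assumption for the example:

```latex
% Basic COCOMO effort formula (organic mode), shown for illustration only
\text{Effort} \approx 2.4 \times (\text{KLOC})^{1.05} \quad \text{person-months}

% Example: a single 100,000-line (100 KLOC) project
\text{Effort} \approx 2.4 \times 100^{1.05} \approx 2.4 \times 126 \approx 302 \ \text{person-months}

% Multiply by a regional, fully loaded monthly labor cost to get dollars,
% then sum over every open source project to approximate a replacement cost
```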

https://www.library.hbs.edu/working-knowledge/open-source-software-the-nine-trillion-resource-companies-take-for-granted

From the blog Thanas CS343 Blog by tlara1f9a6bfb54 and used with permission of the author. All other rights reserved by the author.

Blog Post for Quarter 1

October 5th, 2025

Recently, I’ve been working on using Git. This came in the form of using a little bit of GitHub, a little bit of GitLab, and the textbook meant to instruct me on how to use them. I began to learn the basics of repositories and how to make edits and pull requests.

For example, I am now able to fork a repository, clone that fork (my remote origin) onto my local device, make edits using Visual Studio Code, stage those edits, commit them, push them back to my remote origin, and then open a pull request. And a little bit more. So far, this quarter of my class has been pretty interesting. I will note that working in public like this is very interesting to me. I don’t usually like doing things in public because I assume I should be competent before doing anything in front of others, but oh well.
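For anyone following along, the command-line version of that same workflow looks roughly like this. The repository, branch, and file names here are made up for the example:

```bash
# After forking the repository on GitHub or GitLab, clone your fork;
# the clone's "origin" remote points at that fork.
git clone https://github.com/your-username/example-project.git
cd example-project

# Create a branch for the edits
git checkout -b fix-readme-typo

# ...make edits in Visual Studio Code...

git add README.md                     # stage the edits
git commit -m "Fix typo in README"    # commit them
git push origin fix-readme-typo       # push the branch back to the fork (origin)

# Finally, open a pull request from the fork's branch in the web interface
```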

For the blog post I selected, I wanted to know just a smidge more about Git, since I was learning about it. (It is linked at the bottom of this post.) It mostly discusses the future plans for Git and its potential with AI.

Honestly, there isn’t much for me to really do from this. I just note how AI appears to be making its way into the Git tools I currently use, so that’s mildly interesting. Though, I found it interesting that Git, much like me, is still developing. It’s fun to think about how, as I learn and improve as a person, other things in the world are doing the same. While they are wildly different contexts, I find it cool. Everything is always changing. Even as I learn, I make notes to myself that aren’t in the textbook. Git will probably always have things added by various different people as well.

It encourages and intimidates me in some way. It’s very cool that by the time I “get caught up” it’ll be better than what I’m currently using, but at the same time, what if everything I learned becomes redundant? Though, for me, I was always a person to enjoy experiences, and I never really liked the idea of a limit. If anything, it is just more “fun” for me. There’s more to learn and I’ll never be caught up. My experiences will lead to my growth, so when the time comes, I’ll be much more suited to using the new tools that arrive.

https://github.blog/open-source/whats-next-for-git-20-years-in-the-community-is-still-pushing-forward/

From the blog CS@Worcester – Ryan's Blog Maybe. by Ryan N and used with permission of the author. All other rights reserved by the author.

What the “Grug Brained Developer” Teaches Us About Software Process Management

Link: The Grug Brained Developer

For my self-directed professional development blog, I chose to read The Grug Brained Developer by Carson Gross, who founded the Hypermedia Research Group at Montana State. The website is a free digital version of his book, and in it Gross manages to distill complex software wisdom into simple, almost primitive advice. Written in the voice of a caveman developer, it explores core ideas and fundamentals of coding and communication in a way that’s both funny and accurate.

The article’s central theme is simplicity. Grug constantly emphasizes avoiding unnecessary complexity in both design and process. He warns against over-engineering, over-abstracting, and blindly following trends—ideas that directly connect to what we’ve discussed in CS-348 about software process management principles like iterative improvement, communication, and agile adaptability. In short, Grug argues that most of software development isn’t about doing more, but about doing less and doing it better.

I selected this resource because it presents timeless software management lessons through a format that’s approachable and memorable. Rather than reading another dry technical essay on process maturity or workflow optimization, this post made me think deeply about how simplicity affects process management. In a course where we analyze methodologies like Scrum and Kanban, it’s easy to lose sight of the human side of software development—how teams actually think, communicate, and make decisions. The Grug Brain post brings that perspective back.

What stood out to me most was the section where Grug says, “If no can understand code, no one can fix. Then bug live forever.” This humorous line perfectly encapsulates why clear process and communication are essential to good software engineering. It aligns with our course’s emphasis on maintainability and collaboration. A well-defined process isn’t just bureaucracy—it’s what ensures that projects remain sustainable when team members change or systems evolve.

I also learned how simplicity ties into continuous improvement. In agile environments, each sprint is a chance to refine not just code, but the process itself. Grug’s idea that “small simple thing better than big clever thing” echoes this perfectly: effective software process management focuses on clarity, iteration, and team alignment over cleverness or premature optimization.

Reflecting on this, I realized that I often overcomplicate my own projects—whether by designing too many abstractions or worrying too much about tools instead of workflow. Going forward, I plan to apply Grug’s philosophy by prioritizing clarity in both my code and my project management habits. That means writing simpler documentation, refining processes only when necessary, and valuing human understanding above technical elegance.

In summary, The Grug Brained Developer provides a surprisingly profound take on process management: simplicity and communication are the real foundations of sustainable software development. I’ll carry this mindset into my future work, reminding myself that even in a world of complex methodologies, sometimes the best process is the simplest one.

From the blog CS@Worcester – My Coding Blog by Jared Delaney and used with permission of the author. All other rights reserved by the author.

Navigating UML

I read the article Navigating Complex UML Diagrams: Tips and Tricks for Developers. It gives a lot of insight into practical strategies you can use to maintain and understand UML diagrams, specifically larger ones. The article focuses on UML stereotypes and the organization of diagrams. UML stereotypes are an extension of the standard UML elements; developers use them to add more specific definitions, labeling an element as a class, an interface, a service component, and so on. This adds a layer that helps the diagram effectively communicate the structure and intention of the code. The article also suggests using color coding for groups to emphasize relationships, and keeping your layout styles and naming conventions consistent and clear.

UML allows architects and developers to show both structure and behavior, and these diagrams can be extremely important in a collaborative environment. A clear UML diagram keeps everyone on the same page and helps avoid having to rework code. UML can also reveal architectural flaws early, before you put time into a project: since you can visualize the relations, dependencies, coupling, and flow before any of it is implemented, you can prevent issues. Many of my own issues with coding come from the planning aspect, so I plan on continuing to get better at making these diagrams because I feel they can have a very positive impact on my coding. It can be difficult to picture how different parts of a program will come together, and it can become very overwhelming very quickly.

It’s also interesting how different UML diagrams serve unique purposes. Visual Paradigm’s UML practice guide says structural diagrams like class, component, and package diagrams are good for showing how different parts of a system fit together, while behavioral diagrams like sequence and activity diagrams show how things interact over time. With this separation, a developer can handle one part of the design at a time instead of trying to understand everything at once. There are also artificial intelligence tools that support UML, such as PlantText AI and Visual Paradigm Smart Assistant, which can help save time and reduce manual errors. In the future, I’m curious whether this is something that will still be done manually, or if AI will spread deep into this too. I can’t see these diagrams ever losing importance, but I do question whether humans will be directly making them in the future. Either way, I am excited to practice these techniques and implement UML diagrams into my planning.
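To make the stereotype idea from the article concrete, here is a tiny PlantUML sketch. The class names and stereotypes are invented for illustration, not taken from the article:

```plantuml
@startuml
' Stereotypes label what each element is meant to be
interface PaymentProcessor <<interface>>
class StripeProcessor <<service>>
class OrderController <<controller>>

' Realization: StripeProcessor implements the interface
StripeProcessor ..|> PaymentProcessor

' Dependency: the controller uses the abstraction, not the concrete class
OrderController ..> PaymentProcessor
@enduml
```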

https://moldstud.com/articles/p-navigating-complex-uml-diagrams-tips-and-tricks-for-developers

From the blog CS@Worcester – Aaron Nanos Software Blog by Aaron Nano and used with permission of the author. All other rights reserved by the author.

VCS Safety Net: Protecting Code and Empowering Collaboration

For this quarter’s blog post, I chose to deepen my understanding of version control systems (VCS), specifically how tools like Git and platforms like GitHub and GitLab function within the software development lifecycle. Learning new tools and techniques takes continued practice, and learning Git was no different. My goal with this content is to strengthen my knowledge of the core concepts and purpose behind these vital tools, which my Software Process Management course is currently emphasizing and practicing.

I focused on a website from GitHub itself, which explains what version control is. I chose this resource because it provides a foundational explanation of VCS, its types, and currently popular tools. The article, “What is Version Control?”, defines VCS as systems that give members working on the same project complete visibility into the history of the code and centralize all members’ work. It describes how distributed version control systems (DVCS), like Git, are essential for software development. It explains key concepts including the ability to track every change, work independently, propose code additions, and safely integrate changes while preventing conflicts. In other words, the process of commits, branching, pull requests, and merging, all fundamental processes we are focusing on in my course.

Before this course, and before diving into these resources, my understanding of Git was minimal. I gained a much deeper grasp of how VCS tools serve as both a safety net and a coordination tool. Branching is one of the most important steps within the software development process, as it allows a team (however large or small) to work simultaneously on a project without fear of overwriting or corrupting the main code. It lets team members work independently, cohesively, and in a time-efficient manner, while still being able to access and review modifications made by other members by visiting their branch of the project.

Although all of the information is important, the section on best practices particularly resonated with me. While working with Git, we are encouraged to save changes in small increments, making sure to write a helpful commit message, rather than making large changes and saving at the end. Seeing this best practice emphasized in the article reinforced its importance. Using this technique significantly reduces conflicts and makes debugging errors and explaining changes much simpler. 
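To make that best practice concrete, here is roughly what small, incremental commits on a feature branch look like on the command line. The branch and file names are just placeholders:

```bash
# Work on an isolated branch so the main branch stays safe
git checkout -b add-login-validation

git add LoginForm.java
git commit -m "Add empty LoginForm class"

# ...next small change...
git add LoginForm.java
git commit -m "Validate email field before submit"

# ...another small change...
git add LoginFormTest.java
git commit -m "Add unit test for email validation"

# Share the branch and open a merge/pull request for review
git push origin add-login-validation
```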

I expect that I will continue using Git throughout my professional career, and I plan to apply this understanding immediately as I continue working with Git in class. My goal is to use this knowledge and helpful techniques to practice improving my execution and workflow. I will prioritize working in small increments, committing those small changes, and reviewing my changes to ensure my progress is meeting expectations and to also contribute to the team’s collaboration.

Link to Main Resource:
https://github.com/resources/articles/software-development/what-is-version-control – What is Version Control?

Link to Additional Resources:
https://www.cbh.com/insights/articles/collaboration-tools-changing-the-workplace-landscape/ – Collaboration Tools Changing the Landscape of the Workplace

https://www.planview.com/resources/articles/software-development-collaboration-tools-a-detailed-buyers-guide/ – Software Development Collaboration Tools: A Buyer’s Guide for Empowering Agile Teams

https://fullscale.io/blog/benefits-collaborative-software-development/ – The Benefits of Collaborative Software Development

https://www.geeksforgeeks.org/git/version-control-systems/# – Version Control Systems

https://www.geeksforgeeks.org/git/what-is-git/ – What is Git?

https://github.com/resources/articles/software-development/what-is-software-development – What is Software Development?

From the blog CS@Worcester – Vision Create Innovate by Elizabeth Baker and used with permission of the author. All other rights reserved by the author.

UML Diagrams…Why?

For this quarter’s blog post, I chose to deepen my understanding of Unified Modeling Language (UML) diagrams, which directly relates to current coursework in my Software Construction, Design, and Architecture class, where we translate between code and visual diagrams (i.e., UML class and sequence diagrams). Initially, I found both processes overwhelming and questioned the purpose of using such diagrams instead of simply reviewing the source code step-by-step. To overcome this hurdle, I looked at several resources, but I will focus on Miro’s comprehensive guide, “The Ultimate Guide to UML Diagrams,” which provided much-needed clarity on the concept of UML diagrams.

This guide offers an excellent foundational overview, and emphasizes UML Diagrams as the commonly used and encouraged visual language in software development. It identifies the 14 types of diagrams and categorizes them as either Structural or Behavioral. Structural diagrams are used to define the components of the code, while Behavioral diagrams are used to examine how the code operates over time.
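For a feel of the behavioral side, here is a minimal PlantUML sequence diagram with invented participants; it shows the kind of “operates over time” view that structural diagrams such as class diagrams do not capture:

```plantuml
@startuml
' A behavioral (sequence) diagram: interactions ordered in time
actor User
User -> OrderController : placeOrder()
OrderController -> PaymentService : charge(total)
PaymentService --> OrderController : receipt
OrderController --> User : confirmation
@enduml
```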

A key part of this research involved understanding the drawbacks of using UML diagrams. Since my own initial experience was overwhelming, complex, and tedious, it validated the discourse surrounding the love-hate relationship with UML in the software development field. The disadvantages often center on the process being time-consuming, complex, overwhelming, and potentially ambiguous, especially as projects grow or when team members and stakeholders lack coding and diagram literacy. While arguments exist for making these diagrams optional, I understand that this is a necessary and helpful step in professional practice.

This and other resources consistently emphasize one core objective: UML diagrams are primarily communication tools. While recognizing their flexibility, standardization, and (often) simplicity, their greatest benefit is serving as a visual aid. They create a working summary of a program that allows other team members and stakeholders, who may have limited time and/or specific knowledge, to quickly grasp the architecture and operational flow without having to dig through hundreds of lines of code.

I also learned that the perceived disadvantages of UML are the trade-off required for effective team collaboration and risk mitigation. When working through class activities and homework with smaller codebases, I experienced some of these limitations. I fully understand how a program with 50+ classes would be completely overwhelming and time-consuming to look through and explain without an established visual reference. My personal practice with UML class and sequence diagrams showed me the tediousness of detailing every code component, but also the value of creating and having a visual summary of the code’s building blocks.

In my future practice, I intend to apply this knowledge by creating diagrams that help me summarize code. Whether working on class activities, homework, personal projects, and/or within a development team, I will use UML diagrams to practice summarizing and communicating code as if I were speaking with a team and/or non-technical stakeholders. Ultimately, a diagram is easier to critique and comprehend than 500+ lines of unread code spread across multiple files.

Link To Main Resource:
https://miro.com/diagramming/what-is-a-uml-diagram/ – The Ultimate Guide to UML Diagrams

Link To Additional Resources:
https://www.theknowledgeacademy.com/blog/advantages-and-disadvantages-of-uml/ – Advantages and Disadvantages of UML: An In-Depth Analysis 

https://creately.com/guides/sequence-diagram-tutorial/#what-is-a-sequence-diagram – Sequence Diagram Tutorial – Complete Guide with Examples

https://creately.com/blog/diagrams/uml-diagram-types-examples/#UseCaseDiagram – UML Diagram Types Guide: Learn About All Types of UML Diagrams with Examples

https://creately.com/guides/advantages-and-disadvantages-of-uml/ – Why the Software Industry Has a Love-Hate Relationship with UML Diagrams

https://www.synergycodes.com/blog/why-use-uml-class-diagrams – Why Use UML Class Diagrams?

From the blog CS@Worcester – Vision Create Innovate by Elizabeth Baker and used with permission of the author. All other rights reserved by the author.

Blog & Podcast Discovery – Software Architecture

Resource Selected:
“Patterns of Legacy Displacement” by Ian Cartwright, Rob Horn, and James Lewis
Published on Martin Fowler’s Software Architecture site
https://martinfowler.com/architecture/


Summary of the Resource

This article explores the practical challenges of replacing legacy systems in large-scale organizations. The authors introduce the concept of the “legacy replacement treadmill,” a cycle in which enterprises launch extensive modernization efforts that often stall or fail before meaningful progress is made. The authors argue that the core issues extend beyond outdated technology to include organizational and leadership shortcomings. To address these challenges, they recommend setting clear goals, delivering improvements in small, manageable increments, and avoiding “big-bang” system rewrites, which typically lead to failure. Instead, they propose a more sustainable approach: gradually isolating and updating parts of the legacy system, delivering functional components, and slowly retiring the outdated codebase.
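The incremental approach the authors describe (this kind of gradual routing is often called a strangler fig migration) can be pictured with a small, hypothetical Java sketch: both code paths sit behind one interface, and a router shifts traffic over a slice at a time. None of these names come from the article; they are purely illustrative.

```java
// A minimal sketch of incremental legacy displacement: the old and new code
// paths sit behind one interface, and a router retires the legacy path slice by slice.
interface InvoiceService {
    String generate(String orderId);
}

class LegacyInvoiceService implements InvoiceService {
    public String generate(String orderId) {
        return "LEGACY invoice for " + orderId;   // stands in for the old module
    }
}

class NewInvoiceService implements InvoiceService {
    public String generate(String orderId) {
        return "NEW invoice for " + orderId;      // the rewritten, tested path
    }
}

class InvoiceServiceRouter implements InvoiceService {
    private final InvoiceService legacy = new LegacyInvoiceService();
    private final InvoiceService modern = new NewInvoiceService();

    public String generate(String orderId) {
        // Start with a narrow slice (here: one order prefix), widen it as
        // confidence grows, then delete the legacy class entirely.
        if (orderId.startsWith("EU-")) {
            return modern.generate(orderId);
        }
        return legacy.generate(orderId);
    }
}

public class Demo {
    public static void main(String[] args) {
        InvoiceService service = new InvoiceServiceRouter();
        System.out.println(service.generate("EU-1001")); // already on the new path
        System.out.println(service.generate("US-2002")); // still served by legacy code
    }
}
```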


Why I Chose This Resource

I selected this article because it directly relates to key principles we’ve discussed in our software design and architecture course, such as maintainability, technical debt, and scalable architecture. Rather than just presenting theory, the article demonstrates real-world consequences when these principles are overlooked. Additionally, I’ve observed similar patterns on a smaller scale during team projects, where the instinct to start from scratch often feels easier than improving existing code. This resource provides a practical, realistic alternative to that approach.


Reflection and Key Takeaways

The most impactful lesson for me is that modernization should not be seen as a rapid technical overhaul but as a long-term, structured effort. The authors emphasize defining clear outcomes before writing any code, a concept that shifted my perspective. In future projects, I aim to prioritize measurable objectives over vague goals like “make it better.”

Another important insight is the risk of pursuing feature parity with legacy systems. Attempting to replicate every existing feature can hinder innovation and slow down progress.

The article also deepened my understanding of how human factors such as team habits, organizational power structures, and internal priorities can influence the success or failure of architectural changes. Even the best-designed technical solutions will struggle to succeed without cultural alignment and support for gradual transformation.

Moving forward, I intend to apply this mindset to legacy code by introducing incremental, testable changes that improve the system’s architecture over time. This approach may be slower, but it is ultimately more effective and sustainable.


Link to the Resource:
https://martinfowler.com/architecture/

From the blog Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.

Understanding and Embracing YAGNI

Link: https://codibly.com/blog/articles/yagni-how-to-do-things-when-you-actually-need-them-to-be-done

The blog post “YAGNI – how to do things WHEN you actually need them to be done” goes over the YAGNI (You Ain’t Gonna Need It) principle and why it is necessary as a guardrail against over-engineering in software development. The blog starts off with the origins of YAGNI, which comes out of eXtreme Programming (XP), a methodology used by agile software development teams. Essentially, YAGNI exists so that developers can resist the urge to implement features that are not necessary or needed. The blog compares YAGNI to KISS (Keep It Simple, Stupid): while KISS advocates for more simplicity overall, YAGNI is more focused on discouraging unnecessary functionality. The blog also goes over the risks of over-engineering, as it can lead to more bugs and simply be a burden when it comes to maintenance. Furthermore, it can make your code far more complex for no reason. In the end, though, applying YAGNI responsibly requires good judgment; some additions are harmless as long as they don’t increase complexity, but generally speaking it is better to keep things clean and simple.
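Here is a tiny, made-up Java example of what that looks like in practice (these classes are not from the blog post): the first version speculates about export formats nobody has asked for yet, while the YAGNI version just does the one thing that is actually needed today.

```java
// Over-engineered: abstractions for requirements that do not exist yet
interface ReportExporter {
    void export(String report);
}

class CsvExporter implements ReportExporter {
    public void export(String report) { /* write the CSV */ }
}

class PdfExporter implements ReportExporter {            // nobody has asked for PDF
    public void export(String report) { /* write the PDF */ }
}

class ExporterFactory {
    static ReportExporter create(String format) {         // an extra layer to maintain
        return format.equals("pdf") ? new PdfExporter() : new CsvExporter();
    }
}

// YAGNI version: the only real requirement today is writing a CSV file
class ReportWriter {
    void writeCsv(String report) {
        // write the CSV; add PDF support when (and if) someone actually needs it
    }
}
```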

I picked this blog post because I think this is a very important practice that applies to a workplace environment. You always want to plan ahead and implement features that may be needed in the future, but overdoing it is not good. It is very important to find a balance, as doing too much of either can lead to big problems. For our course, this blog post also covers other software design principles, as well as some agile practices. I think all of these principles are very important when it comes to working in a team environment, which is something I will most likely have to do in the future. In a team environment, it is important to make sure your code is not overly complex, as other people will have to read it and potentially debug it as well.

Going forward, I plan on applying YAGNI principles to my current code as well as any code that I work on in the future. This blog gave me a good reminder that just because we might need an abstraction in the future, that doesn’t mean we have to add it now. That can just lead to more maintenance, bugs, and unnecessary complexity. I can apply these principles by favoring simple versions of programs and by consistently reevaluating the requirements of a program. Overall, this blog post on YAGNI gives a great view and perspective on a principle that is very important in software design.

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.

Mastering OOP Fundamentals with SOLID Principles – ByteByteGo

The blog post/article I chose to read and write about is Mastering OOP Fundamentals with SOLID Principles from the ByteByteGo blog. This blog post goes into many aspects of OOP programming, some of which we’ve discussed in class. The first portion delves into encapsulation, abstraction, inheritance, and polymorphism. It explores key concepts like single inheritance, multiple inheritance, multilevel inheritance, hierarchical inheritance, method overloading, method overriding, etc. It explains how these four fundamentals are important for creating and utilizing OOP effectively, but on their own they don’t necessarily produce code that is easy to work with and maintain.

That’s why it introduces the SOLID principles, which according to the syllabus are something we will eventually touch on. The S is for the Single Responsibility Principle, stating that a class should have only a single reason to change. This ensures better organization, easier debugging, and improved testability, so it’s better to split a complex class into multiple simpler classes. The O stands for the Open/Closed Principle, stating that classes should be open for extension but closed for modification. Essentially, if we want to add new behaviors to a class, new subclasses or interfaces should be added without changing what already exists. The L stands for the Liskov Substitution Principle, stating that objects of a base class must be replaceable by objects of a derived class without altering the program’s correctness. A subclass shouldn’t break existing functionality while behaving like its parent class. This somewhat relates to the interface portion of what we did in class. The I stands for the Interface Segregation Principle, stating that clients should not be forced to depend on interfaces they don’t use. Essentially, interfaces should be small and specific, with only relevant methods. Finally, the D of SOLID stands for the Dependency Inversion Principle. This states that both high-level and low-level modules should depend on abstractions rather than on each other. This helps improve flexibility and modularity, so the code is easier to test and work with without making as many modifications.
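To make a couple of these concrete, here is a small, hypothetical Java sketch (not taken from the blog post) that shows the Open/Closed and Dependency Inversion principles working together: the notifier depends on an abstraction, and new channels are added as new classes rather than by editing existing ones.

```java
// Abstraction that the high-level code depends on (Dependency Inversion)
interface MessageChannel {
    void send(String text);
}

class EmailChannel implements MessageChannel {
    public void send(String text) {
        System.out.println("Email: " + text);
    }
}

// Adding SMS support means adding a class, not modifying Notifier (Open/Closed)
class SmsChannel implements MessageChannel {
    public void send(String text) {
        System.out.println("SMS: " + text);
    }
}

class Notifier {
    private final MessageChannel channel;   // depends only on the abstraction

    Notifier(MessageChannel channel) {
        this.channel = channel;
    }

    void notifyUser(String text) {
        channel.send(text);
    }
}

public class SolidDemo {
    public static void main(String[] args) {
        new Notifier(new EmailChannel()).notifyUser("Build passed");
        new Notifier(new SmsChannel()).notifyUser("Build passed");
    }
}
```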

The reason I chose this blog post/article is that it directly relates to what we’ve learned with the fundamentals of OOP so far, as well as introducing me to something that’s planned for the syllabus. I also saw it as a good primer for Java and OOP thinking to help me better understand the general ideas and concepts that hold it up.

Even though a good portion of what was written is just reiterating some of what we’ve discussed in class, I found it really helpful to have things explained another way with the examples the blogpost gave. It helped me better grasp the purposes behind these fundamental principles and ideas in a way that felt easily digestible. The SOLID portion was also interesting, and everything intuitively makes sense. I can definitely see myself referring back to this and sticking to these ideas as I do OOP programming in the future, because it genuinely does seem to make the code easier to work with and easier to understand.

From the blog CS@Worcester – Site Title by Justin Lam and used with permission of the author. All other rights reserved by the author.

Design Patterns in Software Construction

Hi, 

For this blog post I have chosen the topic of design patterns in software construction. I watched the video linked below, titled Design Patterns in Plain English | Mosh Hamedani, which does a great job of showing how design patterns in software construction work. https://youtu.be/NU_1StN5Tkk?si=aFdc2v01YIvoGq0m

Firstly, we must understand what exactly a design pattern is with respect to code. Design patterns are reusable solutions to common problems in code. From here we can infer that the goal of a design pattern is to help build reusable and extensible software. To help us achieve this, there are three main categories: creational, structural, and behavioral patterns. Creational patterns concern the different ways to create objects. Structural patterns are about the relationships between those objects. Behavioral patterns are about the interaction and communication between objects. From these categories, we choose which one to use based on the specific problem we are trying to solve with the design pattern.

To help with the process of applying a design pattern, you would want to use a UML diagram to better visualize and understand the code as a whole. This way you will be able to see the types of relationships from one class to another, whether it’s inheritance, composition, or dependency, to name a few. Something you may also see on the UML diagram is an interface.

An interface is a powerful tool that is often used in design patterns. Furthermore, interfaces promote loosely coupled applications and flexibility. Using an interface in a design pattern goes a long way toward the overall goal, which is to build reusable and extensible software.
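As a small, hypothetical example of an interface doing that work, here is a sketch of a behavioral pattern (Strategy), similar in spirit to the ones Mosh walks through; the names are invented for illustration. The calculator only knows about the interface, so new behaviors can be swapped in without touching it.

```java
// The interface keeps ShippingCalculator loosely coupled to any one strategy
interface ShippingStrategy {
    double cost(double weightKg);
}

class FlatRate implements ShippingStrategy {
    public double cost(double weightKg) { return 5.00; }
}

class PerKilogram implements ShippingStrategy {
    public double cost(double weightKg) { return 1.25 * weightKg; }
}

class ShippingCalculator {
    private ShippingStrategy strategy;

    ShippingCalculator(ShippingStrategy strategy) { this.strategy = strategy; }

    void setStrategy(ShippingStrategy strategy) { this.strategy = strategy; }

    double quote(double weightKg) { return strategy.cost(weightKg); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        ShippingCalculator calc = new ShippingCalculator(new FlatRate());
        System.out.println(calc.quote(3.0));   // 5.0
        calc.setStrategy(new PerKilogram());   // swap behavior at runtime
        System.out.println(calc.quote(3.0));   // 3.75
    }
}
```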

Mosh then goes on to discuss and show the four principles of object-oriented programming, which are encapsulation, abstraction, inheritance, and polymorphism. Encapsulation deals with bundling data and the methods that operate on it into a single unit or class, and hiding the values or state of an object within that class. Abstraction is about reducing complexity by hiding unnecessary details in classes. A great example of abstraction is a TV remote control: it shows only what you need to see. The next one is inheritance, a mechanism for reusing code across classes, including common behavior. Think of a parent and a child class: the child class inherits from the parent class but also has fields and methods of its own. Polymorphism is the ability of a single object to take many different forms. Mosh stresses that you must know and understand these principles, as they are crucial to building design patterns.
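For the inheritance and polymorphism part, a minimal hypothetical Java example (not from the video) might look like this: the child classes inherit from Shape and override its behavior, and a single Shape variable can take many forms.

```java
// A parent class with behavior that children override
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override
    double area() { return side * side; }
}

public class OopDemo {
    public static void main(String[] args) {
        // Polymorphism: one declared type, many concrete forms
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());
        }
    }
}
```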

Some takeaways are to understand the problem you are facing first, to have a strong foundation in OOP, and to know when not to use a design pattern. Furthermore, creating multiple classes or a new interface just for one or two actions of a given state is simply not worth it; go with the else-if statement instead.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.