Category Archives: CS-348

Blog Post 2

For this blog entry I decided to choose something related to Agile. In the blog post Agile Principles: The Complete Guide To All 12 Principles For 2025, Sean O’Connor goes over how Agile’s flexibility gives it real-world benefits for businesses, development teams, and customers. Much like what we went over in class, O’Connor explains that Agile’s strength isn’t so much the speed of delivering a product as its efficiency in producing a working product, with speed as a positive byproduct. As someone who’s never been good at long-term planning, what the blog post explained really resonated with me. Catching problems early before they become a core “feature” of the software, building what the customer actually needs rather than wasting time, and especially creating quick wins are all things that I value in a game plan. The blog post then goes into the 12 principles that make up all Agile methodologies, a couple of which really stood out to me.

One of the principles was delivering working software frequently. I like the idea of real progress you can actually see and use, not just a promise of what could potentially be. It’s nice when you get to see something that actually works early, even if it’s just a rough version. This approach makes more sense to me than spending weeks or months planning something that might not even be what was wanted, which is just a huge waste of time and energy. So to me, being able to get instant feedback, learn faster, and fix problems before they turn into something major is huge.

Another principle that ties into that is measuring progress by working software. That line really sums up Agile’s idea of “cutting out the chaff”. It’s easy to get caught up in planning, documentation, or presentations that sound good but don’t actually move the project forward. A working prototype says more than any report could. I think that same logic applies to class projects too: seeing your code actually run feels like a real measure of progress, not how well it’s outlined on paper.

The principle of simplicity, or “maximizing the amount of work not done,” also hit home for me. It’s about focusing on what truly matters and using just the time and energy needed. I tend to overthink projects at the start, but this mindset reminds me that getting the essentials working is more productive than building unnecessary features. Fewer moving parts also means fewer ways for things to break (something I know all too well).

All in all, it’s reassuring to see that my “slacker mindset” can be the basis for something actually productive.

From the blog CS@Worcester – CS Notes Blog by bluu1 and used with permission of the author. All other rights reserved by the author.

Effective API Design

I have been reading an article by Martin Fowler titled APIs: Principles and Best Practices to Design Robust Interfaces. It discusses how APIs (Application Programming Interfaces) act as small bridges that let different software systems communicate and stay in sync with one another. Fowler emphasizes points such as clear naming, simplicity, consistency, and avoiding breaking changes for existing clients, and supports them with real code demos and real-life scenarios. It is a combination of theory and practical tips, so anyone interested in software design can dive in.

I picked this read because API design is one of the foundations of software engineering and intersects with my course on Software Development and Integration. I prefer scalable apps that stay clean and are easy for other developers to connect to. Exploring Fowler’s work was my way of learning how to create interfaces built on sound principles, so that other people can jump in and extend them without hassle. A lot of material on this topic stays theory-heavy, but this article instead presents actual, practical tactics, just what one needs at school and on the job.

The importance of versioning and maintaining backward compatibility was one of the biggest things I took away from the article. Fowler reminds us that APIs must evolve without breaking existing clients, which requires you to plan, test, and communicate with your users. That resonated with me because in group projects I had done before, a minor change to our module could bring everything down the line. On reflection, well-planned API design rules seem like the natural way to prevent such headaches and waste less time.
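
To make that concrete, here is a minimal sketch of one common way to evolve an API without breaking existing clients, using URL versioning. The framework (Flask), routes, and fields are my own illustration, not taken from Fowler’s article:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # v1 keeps its original response shape, so existing clients never break
    @app.route("/v1/users/<int:user_id>")
    def get_user_v1(user_id):
        return jsonify({"id": user_id, "name": "Ada Lovelace"})

    # v2 is free to change the shape; old clients simply stay on /v1
    @app.route("/v2/users/<int:user_id>")
    def get_user_v2(user_id):
        return jsonify({
            "id": user_id,
            "name": {"first": "Ada", "last": "Lovelace"},  # breaking change, isolated in v2
        })

The old route only gets retired after clients have had time to migrate, which is exactly the plan-test-communicate cycle Fowler describes.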

I also liked the fact that Fowler emphasized intuitive naming and consistency. According to him, the more predictable the method names, parameters, and endpoints are, the friendlier an API is. Establishing a proper structure and hierarchy saves a great deal of time, significantly reduces mix-ups, accelerates integration, and makes the whole development process enjoyable. It reminded me that considerate design serves not only the end user but also the people who actually build with the API, and the whole ecosystem becomes more efficient and easier to maintain.
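
As a small illustration of that predictability (these endpoints are hypothetical, not Fowler’s), compare a consistent resource hierarchy with an inconsistent one:

    # Consistent: plural nouns, same hierarchy everywhere -- easy to guess
    GET    /users
    GET    /users/42
    GET    /users/42/orders
    POST   /users/42/orders

    # Inconsistent: mixed verbs, casing, and nesting -- every call is a surprise
    GET    /getUser?id=42
    POST   /order_create
    GET    /Users/42/fetchOrders

With the consistent version, a developer who has seen one resource can correctly guess the URLs for every other one.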

In the future, I will apply these lessons to my class projects and whatever work I do professionally. Whenever I create an API to support a web application or integrate third-party services, I will focus on clean documentation, predictability, and keeping older versions working. Following these rules, I can deliver interfaces that are easy to maintain, that help other developers, and that survive updates. This article made me even more respectful of the discipline of API design, and I am eager to put these tangible strategies to immediate use.

From the blog CS@Worcester – Site Title by Yousef Hassan and used with permission of the author. All other rights reserved by the author.

Quarterly Blog 2

From the blog CS@Worcester – dipeshbhattaprofile by Dipesh Bhatta and used with permission of the author. All other rights reserved by the author.

Importance of version control in the process of development

[Figure: infographic of version control processes in Git, showing key operations like fork, merge, and pull request.]

As a software developer, you will undoubtedly run into version control in any project you work on. Eventually every developer has to fix bugs or add a feature to a product. To learn more about version control, there is no better website to learn from than GitHub.

What is Version Control?

[Figure: a distributed version control system, showing interactions between developers and the main repository.]

GitHub gives an amazing analogy: imagine you’re a violinist in a 100-piece orchestra, but you and the other musicians can’t see the conductor or hear one another. Instead of synchronized instruments playing music, the result is just noise.

Version control is a tool used to prevent this noise from happening. It helps streamline development, keeps track of every change, and allows projects to scale up.

Version Control tool factors

Version control may not be necessary depending on the scale of your project; however, most of the time it is useful to have it set up. Some of the factors in deciding whether to use version control include:

  • Scalability: Large projects with many developers and files benefit from version control.
  • Ease of use: A user-friendly UI helps manage learning curves and adoption.
  • Collaboration features: Supporting multiple contributors and communication between them.
  • Integration with existing tools: Using tools everyone already has access to.
  • Supports branching: Letting developers work on different parts of development in parallel benefits a project greatly (see the sketch after this list).
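
For that last point, here is a minimal sketch of what a typical Git branch-and-merge flow looks like (the branch and file names are made up for illustration):

    git checkout -b feature/login    # create and switch to an isolated branch
    # ...edit files, then record the work...
    git add login.py
    git commit -m "Add login form validation"
    git checkout main                # return to the shared branch
    git merge feature/login          # fold the finished feature into main

Work on feature/login never disturbs main until it is explicitly merged, which is what lets many developers work in parallel.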

Common Version Control Applications

  • Git: Git is an open-source distributed version control tool preferred by developers for its speed, flexibility, and because contributors can work on the same codebase simultaneously.
  • Subversion (SVN): Subversion is a centralized version control tool used by enterprise teams and is known for its speed and scalability.
  • Azure DevOps Server: Previously known as Microsoft Team Foundation Server (TFS), Azure DevOps Server is a set of modern development services, a centralized version control, and reporting system hosted on-premises.
  • Mercurial: Like Git in scalability and flexibility, Mercurial is a distributed version control system.
  • Perforce: Used in large-scale software development projects, Perforce is a centralized version control system valued for its simplicity and ease of use.

Final thoughts

Every developer has at one point heard of Git, and it may well be one of the best developer tools ever invented. I have prior experience using version control, but this research was an important refresher. If you wish to learn directly from GitHub, you can read the article this blog was inspired by here.

From the blog CS@Worcester – Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.

Worldwide outage

https://www.cnn.com/2025/10/25/tech/aws-outage-cause

The cloud provider AWS experienced a massive outage on October 20, 2025 that shut down or impacted many of the most popular services and products on the internet, like Roblox and Snapchat. The outage was so large that many systems and products became unusable. The issue stemmed from a DNS race condition in which multiple automated systems tried to update the same DNS entry, leaving behind an empty record. That empty record was then carried down to other services like EC2, causing those to fail, and further down into other workflows and systems that relied on services like EC2. Those failures spread to the network load balancers, snowballing into an enormous mess that wrecked many apps and services. I personally clocked into work to find our error-handling system absolutely flooded with alarms and alerts. It got so bad my boss told me to just ignore the errors for the day (one of my sprint tasks is to keep alerts down).

I chose this article because of its relevance to testing software and the probable use of Git to find out when and where the error occurred. A tool like git bisect could be used to troubleshoot where the issue was introduced into the workflow, narrowing down the change that let the DNS entry be assigned by two systems that could trigger at the same time.

Furthermore, AWS stated to the public that “We know this event impacted many customers in significant ways. We will do everything we can to learn from this event and use it to improve our availability even further.” This statement aligns with the values of Scrum, since it prioritizes openness and respect, which AWS shows its customers by openly documenting the issue while stressing communication to reassure them during the recovery. They also followed the Agile manifesto by responding to change over following a plan: their customers needed services back immediately, so AWS prioritized getting services back online rather than documenting and collaborating with customers first. Releasing their documentation only once the system was fully restored, in under 15 hours, shows they prioritized delivering working software. In a way they also benefited from focusing on individuals and interactions over processes and tools, since this error occurred at multiple levels but started from a single point, so no one tool or process could immediately reveal the root cause.
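
For readers who haven’t used it, this is roughly how git bisect narrows down the commit that introduced a regression; the tag name below is a placeholder, and the good/bad verdict would come from whatever check reproduces the failure:

    git bisect start
    git bisect bad                  # the current commit exhibits the failure
    git bisect good v1.4.2          # a commit or tag known to work (hypothetical tag)
    # Git checks out a midpoint commit; test it, then report the result:
    git bisect good                 # or "git bisect bad", depending on the test
    # Repeat until Git prints the first bad commit, then clean up:
    git bisect reset

Because each answer halves the remaining range, even thousands of commits can be searched in a dozen or so steps.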

From the blog CS@Worcester – Aidan's Cybersection by Aidan Novia and used with permission of the author. All other rights reserved by the author.

Quarter 2 Blog Post for CS-348

My chosen source for my 2nd blog entry for CS-348 is: https://www.sciencedirect.com/science/article/pii/S0950584922001884#abs0001

Written in 2023, this article from ScienceDirect covers a set of 182 surveys conducted with Scrum team members to determine the influence maturity has on the effectiveness and success of a Scrum team and their project. The relevance seems fairly self-evident, as college students can be a mixture of mature and immature, which the article suggests could impact the effectiveness of Scrum teams in the classroom.

This leads me to why I chose this article as my source. In an environment like our classroom, the maturity of the students and even the professor(s) involved can, as said in the relevance portion, impact our Scrum teams’ effectiveness and efficiency. I decided this was a strong connection to make for those of us new to the Scrum framework, because while understanding Scrum and knowing how it works is important, I think team member maturity matters just as much; it has affected my ability to work effectively and efficiently in teams in the past.

I like that this article provides a good explanation of Scrum and what its advantages are; it gives context for those unfamiliar with Scrum to understand where the article is headed, although I would argue you don’t strictly need the context, as the study this article dives into could apply to almost any framework out there. As for reflecting on this material, I want to pull back to where I mentioned that the maturity of my teams in past group projects has most certainly affected our ability to be effective and efficient at completing our goals; this is why the material called out to me as I read it. As I read, I also started to compare those experiences with what we’re doing in CS-348 currently. While not working in full Scrum teams, our teams of four working on class exercises to learn Scrum are a good comparison for maturity’s impact on our ability to finish work in a reasonable amount of time.
For example, my first team was decently good at getting our work done in a reasonable period of time, but I will admit we could sometimes be a bit slow when we let our maturity slip. Once the teams were mixed around, I saw how much more maturity mattered as we all adjusted to the effectiveness of our new team members. Some teams got slower, some got faster; while I’m not pinning the reason entirely on maturity, I do think it’s a large influence. I also think it’s important that our teams got swapped around, not just so we learned to work with new people, but so we could learn to properly adjust and adapt to changing environments.

From the blog CS@Worcester – Splaine CS Blog by Brady Splaine and used with permission of the author. All other rights reserved by the author.

Developing Teams: The Importance of a Strong Software Team

For this quarter’s blog post, I further explored the importance of developing a strong software team. The resource I chose is an article called “Better Software Engineering teams — Structures, roles, responsibilities and comparison with common approaches.” I chose this resource because it offers insights into how groups are organized for success, as well as the drawbacks of specific approaches in use today. Building strong software development teams ties directly into my current course, Software Process Management, and its discussions on team structures and project methodologies.

This article identifies and describes core characteristics of high-performing software engineering teams. It stresses the importance of traits like clearly defined roles, balanced seniority, responsibility and ownership, reasonably sized teams, trust, transparency, and strong leadership, along with both executing and developing skills, to build and maintain a team environment of honesty, cohesion, and shared responsibility.

This article strongly emphasizes the importance of small, manageable team sizes, and provides a visual to show how bigger teams lead to higher chances of miscommunication and reduced efficiency.

Two common flawed approaches are analyzed (Single-Discipline Teams and Manager-Led Cross-Functional Teams) and compared, via pros and cons lists, to the optimal approach: the Self-Managed Cross-Functional Team. The key takeaway is that when team members understand their contributions to company and project goals, they are better equipped (with hard and soft skills) to organize themselves and produce results efficiently.

This resource also reiterates the importance of team structure and team size, something I am directly experiencing in my POGIL-based class and in our discussions on Agile and Scrum methodologies. The article provides professional, industry-level validation for this very concept: the benefits of small, cross-functional groups for project completion.

My takeaway from this article is the benefit of self-managed cross-functional teams, and how unbalanced teams can lead to an increase in overworked members, a lack of team cohesion, and decreased motivation toward the project’s development and goals. Group work can be challenging, and even more so when the core values of a functional, strong team are absent. The article also emphasized the importance of strong leadership, as poor and ineffective leadership can be the root cause of common issues like “quiet quitting,” where team members feel it is the only way to call attention to a poor leadership environment. This challenge is present, and much too common, in fields beyond software development teams.

As someone who sometimes struggles in team settings due to worries about offering incorrect or overly simple input, I will make sure to understand the value I contribute to the team and not shy away from sharing my knowledge. Even if my contributions are “simple” or “incorrect,” they provide an opportunity to connect with and learn from my teammates, build an environment of trust and open communication, and strengthen my own leadership skills, since leadership does not start and end with the managers and leads of the team.

Main Resource:
https://medium.com/geekculture/better-software-engineering-teams-structures-roles-responsibilities-and-comparison-with-common-fb5c3161c13d – Better Software Engineering teams — Structures, roles, responsibilities and comparison with common approaches.

Additional Resources:
https://www.inc.com/peter-economy/why-quiet-quitting-is-actually-a-leadership-failure-and-how-to-fix-it/91181524 – Why Quiet Quitting Is Actually a Leadership Failure — and How to Fix It. Quiet quitting is a virus that affects entire organizations. Where it stems from might surprise you.

From the blog CS@Worcester – Vision Create Innovate by Elizabeth Baker and used with permission of the author. All other rights reserved by the author.

Types of Software Testing: Ensuring Quality in Development

Within the Software Development Lifecycle, one major step that takes a decent amount of effort is software testing. The purpose of software testing is to make sure the program has no problems and meets the requirements set by the customer. Software testing is divided into two different activities: verification and validation. Verification is the step where programmers check that the software is built correctly according to its specification, while validation is the step where programmers check that the software actually meets the customer’s needs. For example, validation might confirm that a website’s new feature can really handle an increase in daily users on the platform.

Moving on, there are two broad types of testing programmers use to verify and validate that the program works as intended: automation testing and manual testing. These then divide into smaller, more specific tests that focus on certain aspects of the program. Automation testing is when programmers write test scripts, usually with the help of testing software, so the tests can be run repeatedly without human effort. Manual testing is when a tester executes tests by hand, working through different sections of the program without scripts.

Within software testing there are different levels of testing: unit testing, integration testing, system testing, and acceptance testing.

Unit Testing

  • Checks each individual component or unit of the software in isolation (see the test sketch after these lists)
  • Makes sure each unit behaves as the programmers intended before it is combined with the rest of the software

Integration Testing 

  • Checks two or more modules together after they have been combined into the program
  • Makes sure the components and their interfaces work together as intended

System Testing

  • Verifies the complete, integrated software as a whole
  • Also checks the system elements it runs alongside
  • Confirms the program meets the requirements of the system

Acceptance Testing  

  • Validates that the program meets the customer’s expectations
  • Checks that the program works correctly on a user’s device
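
To make the unit level concrete, here is a minimal sketch of an automated unit test using Python’s unittest framework; the function under test is my own made-up example:

    import unittest

    def apply_discount(price, percent):
        """Return price reduced by the given percent."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_normal_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(19.99, 0), 19.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Because the tests are scripted, they can be rerun automatically after every change, which is exactly the benefit of automation testing described above.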

These different types of testing are used to prevent issues down the line, so that programmers can find potential bugs, improve the quality of the program, improve the user experience, test for scalability, and save time and money. Testing saves money and time because employees who would otherwise be stuck solving problems with a broken program can instead work on a new product. A business wants a program that works and improves over time.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Scrum On

Software Development Methods

Wow, look at us, huh? Almost a month later, another post appears! This month (I guess I’ve moved to monthly blogging… We’ll tighten it up, I promise) has been mostly about the different ways to go about developing something. Neat, huh?

So far, we’ve talked about waterfall, agile, and the latest, scrum!

SCRUM

Scrum builds upon previous development methodologies and is a natural extension of Agile. It is not, however, ‘better’ than Agile, but simply different. No one methodology is the best, but some perform better than others in certain circumstances. Scrum is the latest methodology my class has been learning about, and as such I decided to take some time to look further into it.

Scrum in 20 minutes is a video I came upon while looking for examples. It explains the process of Scrum, why it is used, and how it differs from other methodologies. The video also contains a few examples of how using Scrum has been beneficial.

One of the reasons I decided to watch this video was that it simply looked professional. It was well polished and felt like a finished product. A lot of the time, when I go looking for academic supplemental material, I’m presented with a sea of the same animated characters, slideshows, and whiteboard-style videos, so this one was a VERY nice change of pace. More of these, please! (I have some words with whoever invented PowToon).

This video really helped me see how Scrum is applied to a real-life example. I also appreciated the refresher on the process as a whole, but having real-life examples of a full sprint, of planning, and of what each team member’s role contributes was really helpful in better understanding Scrum.

Something I realized while watching the video was that Scrum does not have to be specific to software development. I play a lot of optimization games, and something like Scrum just feels extremely organized; it is something I feel is worth applying, at least in concept, to more of my life than just this one small facet. Organization and goal setting are important in almost everything we do, and Scrum is just one way to do that, but it has been refined over many years of trial and error.

I am excited to apply scrum to future projects, and look forward to the increased organization that a solid planning methodology will bring to the table.

This concludes my mandatory blog post of Quarter 2 for the semester.

— Will Crosby

From the blog CS@Worcester – ELITE Computer Science by William Crosby and used with permission of the author. All other rights reserved by the author.

Understanding RESTful API design principles – Upsun Blog

This quarter I decided to read “Understanding RESTful API design principles” by the Upsun blog, which talks about many aspects of RESTful API. I picked this because it directly relates to what we’ve been learning recently in class, and relates to a system I will have to work with next semester for my Capstone. I figured it would be a good idea to get more familiar with REST API early.

The article starts off explaining what REST (REpresentational State Transfer) actually is. It’s a “set of architectural principles that define how to design and implement these interfaces”, and the article discusses the HTTP methods involved. Since REST is the only API design style I knew of, I had no frame of reference for its advantages compared to other options, but the article provides a chart comparing RESTful APIs to GraphQL (good for more efficient and specific data fetching) and gRPC (good for microservices and streaming).

The RESTful architecture typically contains a client, server, and database, but it can include additional elements such as a load balancer or a cache. The client, server, and database are fairly straightforward: the consumer, the server hosting and processing the requests, and the database storing the data. The load balancer is also intuitive from its name, but for a smaller-scale system like the Food Pantry, which is unlikely to experience a large server load at any time anyway, it’s probably unnecessary.

The article goes into six design principles of REST APIs:

  • Client-server architecture: the client and server should operate independently, which improves long-term maintainability.
  • Stateless communication: each request from the client should contain all the information needed to process it. Nothing is stored on the server between requests, so each request is self-contained and can be handled independently, simplifying the process.
  • Cacheable responses: if an API response is cacheable, it should include caching headers such as “Cache-Control” or “Expires”, so the data can actually be cached and the server doesn’t need to be queried every time (see the sketch after this list).
  • Layered system: in many ways this seems similar to the principle of abstraction; the API should operate across multiple layers without the client being aware of them, reducing complexity and sparing the client from interacting directly with multiple backend servers.
  • Uniform interface: the way resources are interacted with (i.e., HTTP methods, URL patterns, etc.) should be standardized and consistent.
  • Code on demand: the article states this one is optional; servers can send executable code such as JavaScript to extend client functionality.
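
As a small sketch of the cacheable-responses idea (Flask is my stand-in framework here; the route, data, and max-age are made up for illustration):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/items")
    def list_items():
        response = jsonify([{"id": 1, "name": "canned soup"}])
        # Tell clients and intermediaries this result may be reused for 5 minutes,
        # so they don't have to hit the server on every request.
        response.headers["Cache-Control"] = "public, max-age=300"
        return response

A browser or proxy that sees this header can serve repeat requests from its cache until the max-age expires, cutting load on the server.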

Many of these principles were fairly intuitive, but it’s good to have them explicitly stated anyway. Some also connected directly to our class discussions. Things like caching and code on demand are topics we didn’t talk about, but I’ll keep them in mind as we do more with REST APIs.

From the blog CS@Worcester – Site Title by Justin Lam and used with permission of the author. All other rights reserved by the author.