CS343: The Lost Art of Refactoring

Papa Bear, is it true the humans put their noodles in one bowl, their vegetables in another, and the broth in a third, unrelated one?

This week I read an article from Martin Fowler’s blog titled “Refactoring: This Class is Too Large” by Clare Sudbury. You can find that post here.

What was the blog about?

Sudbury walks us through how she would refactor a poorly written, real-life codebase and the common mistakes people make while developing code that ultimately precipitate rounds of refactoring.

Why do we refactor?

Our code evolves. At first, as Sudbury puts it, what most people have are big “entrance halls” to their problems: big main methods and such, where a jumble of not-necessarily-related but nevertheless heavily coupled code sits together for the sake of convenience, and because at that point we have only a shaky grasp of what architecture our code will have in the future. That’s fine, for a start and for a time.

Problems arise when we keep trying to entertain in the “entrance hall”. We need to refactor in order for our structural conventions to even continue to make sense.

Why did I choose this article?

I need to be better and more strategic about refactoring, and having a ton of visual references (the examples) paired with reinforcement of best practices helps tremendously.

There are also other considerations we haven’t talked about in class that Sudbury covers in more detail, such as how to structure our refactors within a series of commits that make logical sense to anyone looking from the outside in, so her blog is doubly worth reading for those extra touches alone.

What are the steps to refactoring?

For the most part, these steps are well-founded in the context of our course and should be pretty easily understood in that sense. In short, though, we can think of refactoring a method out of a parent class as having six distinct steps.

  1. Organize methods and related code into distinct regions–more of a logistical than an architectural point, but keeps with what we’ve learned in this course. Code that changes together stays (and collapses) together.
  2. Verify dependencies and class relationships of the method to be refactored using diagrams or similar tools–again, tracks with what we’ve learned. This is exactly the use case of UML class and sequence diagrams.
  3. Clean up methods that stay in the “entrance hall”–we’ll keep some parts of our method in the main class, but with appropriate changes, since methods they might have invoked may now be sitting elsewhere.
  4. Create a new class to contain the refactored method and test it in the simplest terms possible (tiny steps principle).
  5. Build more test coverage for the new class (TDD).
  6. Move method(s) to the new class.

We repeat this process for as many methods with unique behaviors as there are in the code.
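
To make steps 3 through 6 a bit more concrete, here is a minimal, hypothetical Java sketch of my own (the class and method names are invented, not taken from Sudbury’s post): a totaling method is moved out of a bloated “entrance hall” class into its own small class.

```java
import java.util.List;

// Step 4: the new class that will own the extracted behavior.
class InvoiceTotaler {
    // Step 6: the method moved out of the "entrance hall".
    double total(List<Double> lineItems) {
        double sum = 0.0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }
}

public class ReportApp {
    public static void main(String[] args) {
        // Step 3: the entrance hall now delegates instead of doing the math itself.
        InvoiceTotaler totaler = new InvoiceTotaler();
        double total = totaler.total(List.of(19.99, 5.25, 3.50));
        System.out.println("Invoice total: " + total);
    }
}
```

Step 5 would then grow a small test suite around InvoiceTotaler, e.g. asserting that total(List.of(1.0, 2.0)) returns 3.0, before any more behavior migrates over.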

How does this relate to the course material?

We learned that, in the course of developing a program, we might have code structures that become obsolete or that get consolidated into others, i.e. composition. And we’ve done a little refactoring, as with the DuckSimulator code from one of our homeworks. But what we haven’t looked at is how to actually systematize this process of refactoring so that it becomes second nature to us and the steps taken become intuitive rather than a feat of mental gymnastics. If we can’t conceptualize the process of refactoring as an organic evolution of our codebase, we are doomed to stay in cycles of bad code and refactoring, bad code and refactoring, etc. For my own sake and for that of my professional career, I’d better learn to refactor more.

It’s not just about making unit tests.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Open Source Licensing

Throughout my Computer Science degree, I have contributed to and collaborated on projects that have been posted to GitHub or GitLab. I have also utilized, downloaded, and sometimes even shared school material that was Free and Open Source Software (FOSS). In the past, I never gave much thought to my rights to the material I accessed, how I was legally allowed to use it, and what practices needed to be in place to protect my own works posted on various version control management sites. In this class we are exploring licenses and copyrights when it comes to any project code that an individual produces, and the legalities behind the use, alteration, and distribution of said works.

In the video Open Source Licence Types, creator Pro Tech Show dives into open source licenses specifically. This area of copyright law as it applies to code is important for me to understand because, as someone who will constantly use these sites to host my projects, and as someone who plans to contribute to or create certain HFOSS projects, I need to have a good grasp on how I can go about using others’ work and sharing my own code.

This video simplifies the more than 100 open source licenses by grouping them into five broad categories based on how they affect the user and the copyright owner. These categories are public domain, permissive licence, weak copyleft, strong copyleft, and stronger copyleft. The most interesting part of this video was its explanation of the automatic “all rights reserved” copyright and how public domain waives all of those rights. I thought this was interesting because many people may be unaware of the automatic copyright placed on code they have posted to GitHub. One may have posted it with the intention for it to be shared and collaborated on; that, however, would require a public domain dedication, which acts as the absence of a licence and may be more along the lines of what the author had intended. It is not only important to know how licensing works as the author, but also as the user. Again, one may assume they have public domain rights to access, download, and modify code found on GitHub; however, I should be taking more care to examine the specific licensing under each project. This would help alleviate any legal issues down the road while also getting me more familiar with the different types of licensing and which types of projects require which licence.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Writing Cleaner Code: Breaking Out of the Student Mindset

https://www.geeksforgeeks.org/blogs/tips-to-write-clean-and-better-code/

For most of our time in college, while learning how to create working and usable code, there was not really a strong emphasis on how to write “clean code”. Sure, best practices, industry/language standards, and formatting were explained; however, there is another aspect of code legibility and readability that is important for everyone writing code to understand. Our class this semester explores a more in-depth view of which industry standards are to be followed and even helps us unlearn some basics (like comments) that we were utilizing in our code.

This article from GeeksforGeeks outlines seven key tips for writing clean, maintainable, and efficient code. It emphasizes that writing good software goes beyond just making it work; the code must be easy to read, understand, and change, as developers spend significantly more time reading code than writing it. The article indicates that adhering to certain practices leads to reliable, scalable software that is easier to debug and maintain, ultimately creating better collaboration among developers.

Some of these practices/principles are ones that we, as students, have already learned to adopt, such as using meaningful names for methods and variables, as well as learning how to organize our projects, specifically when it comes to object-oriented programming. A new takeaway that I will be more aware of is how descriptive the names are. I used to think that overly long variable names were “bad practice”; however, for the sake of readability and general understanding of the code, longer, more descriptive variable names may be warranted.

When it comes to practices that were a new concept for me, the utilization of comments in code was an important one to unlearn. The article notes that code should be self-explanatory through clear syntax and naming. Comments should only be used when absolutely necessary, rather than stating the obvious. Another aspect of clean coding that helped me alter the way I will continue to code is the idea that methods should only serve a single purpose. This idea, otherwise called the Single Responsibility Principle, notes that functions and classes should only do one thing and do it well. They should be small and focused, avoiding nested structures or too many arguments.
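
As a rough illustration (my own hypothetical snippet, not from the GeeksforGeeks article), here is what descriptive names and small, single-purpose methods might look like in Java:

```java
import java.util.List;

public class OrderReport {

    // Each method does one thing, and its name says what that thing is,
    // so no comment is needed to explain the obvious.

    static double calculateOrderTotal(List<Double> itemPrices) {
        double total = 0.0;
        for (double price : itemPrices) {
            total += price;
        }
        return total;
    }

    static String formatReceiptLine(String customerName, double orderTotal) {
        return customerName + " owes $" + String.format("%.2f", orderTotal);
    }

    public static void main(String[] args) {
        double orderTotal = calculateOrderTotal(List.of(4.50, 12.00));
        System.out.println(formatReceiptLine("Avery", orderTotal));
    }
}
```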

This article was important because it bridges the gap between “student code” (written for a class assignment) and “professional code” (written to be read and maintained by a team). Understanding how to write clean code will immensely help make all of your projects look more professional, as well as help you with technical interviews. Adopting these habits signals to employers that you aren’t just a coder who is able to put together a small school project, but a software engineer who builds sustainable, high-quality products.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Week 3 – CS 343 blog: REST API DEVELOPMENT AND BEST PRACTICES

REST APIs are something really interesting and fun to work with. They enable communication between different software systems over the internet, typically using the HTTP protocol. However, REST APIs can sometimes be difficult to get right because of complex queries and filtering, batch operations and side effects, etc. The good thing is that I went through this blog post from Epignosis Engineering on Medium. They explain some good tips for working with REST APIs. I will walk through their ideas and plans to help us be better at REST API development.

Here is their blog https://medium.com/epignosis-engineering/rest-api-development-tips-and-best-practices-part-1-9cbd4b924285

  1. Planning
    • Do research first: Study existing REST API designs, standards, and popular APIs. Consider whether REST is the right paradigm, but also explore alternatives like GraphQL.
    • Look at other APIs: Try working with well-known APIs (GitHub, Stripe, Twitter, PayPal) to understand what works and what doesn’t

2. Foundations Matter

  • A solid early foundation avoids costly refactors later.
  • Assume the API will grow: design for scale, future endpoints, versioning, pagination, analytics, etc.

3. Specification

  • Write an API spec before coding
  • Use tools like OpenAPI/Swagger for designing your API contract
  • Specification pays off – especially for APIs that are not just internal

4. Testing

  • Critical for APIs: because they connect server data with clients, they need to be very reliable
  • Don’t rely solely on manual testing – build an automated test suite
  • Focus on functional (black-box) tests, not just unit tests
  • Use a test database that can be reset; include regression tests for past bugs
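
As a sketch of what such an automated, black-box test could look like, here is a hypothetical JUnit 5 test of mine that hits an imaginary /cars endpoint over HTTP (the endpoint, port, and assertions are assumptions, not taken from the article):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class CarsApiTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void listingCarsReturnsOkAndJson() throws Exception {
        // Black-box: we only talk to the API over HTTP, never to its internals.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/cars"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.headers()
                .firstValue("Content-Type").orElse("")
                .contains("application/json"));
    }
}
```

A regression test for a past bug would look much the same, just pinned to the exact request that used to fail, with the test database reset before each run.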

5. Deployment

  • Decouple your API from other server apps: keep the API as a separate module.
  • Why? So updating or deploying one part doesn’t risk breaking everything else.
  • Independent deployments make development and operation safer and simpler.

6. Other Good Practices

  • Be consistent in resource naming: choose either singular or plural for your endpoints (/car vs /cars), but don’t mix.
  • For PUT or PATCH requests, return the updated resource in the response so clients know its new state.
  • Avoid using multiple forms of authentication or session mechanisms: for example, don’t mix custom tokens with default PHP session cookies (PHPSESSID) — it leads to confusion.
  • Don’t leak internal errors (e.g., SQL errors) to API consumers. Log the details internally, but return a generic 500 error externally for security reasons.
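
To illustrate the last two bullets, here is a small, framework-free Java sketch of my own (the Car record and handler are hypothetical, not from the article): the PATCH handler returns the updated resource, and an internal SQL error is logged but surfaced only as a generic 500.

```java
import java.sql.SQLException;
import java.util.logging.Logger;

public class CarHandler {

    private static final Logger LOG = Logger.getLogger(CarHandler.class.getName());

    record Car(int id, String color) {}
    record ApiResponse(int status, String body) {}

    // Stand-in for a real persistence layer.
    private Car saveColor(int id, String color) throws SQLException {
        return new Car(id, color);
    }

    // PATCH /cars/{id}: return the updated resource so the client knows its new state.
    public ApiResponse patchCarColor(int id, String newColor) {
        try {
            Car updated = saveColor(id, newColor);
            return new ApiResponse(200,
                    "{\"id\": " + updated.id() + ", \"color\": \"" + updated.color() + "\"}");
        } catch (SQLException e) {
            // Log the internal detail, but never leak it to the API consumer.
            LOG.severe("Database error while updating car " + id + ": " + e.getMessage());
            return new ApiResponse(500, "{\"error\": \"Internal server error\"}");
        }
    }
}
```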

Why This Matters

  • The article is very practical: instead of rehashing REST theory, it focuses on avoiding pitfalls the author has personally encountered.
  • By planning, specifying, versioning properly, and testing early, you build a more stable and maintainable API.
  • A thoughtful deprecation strategy and good error-handling also improve reliability and developer experience for your API clients.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.

Backend Development for a future Frontend Developer

After spending the past few weeks learning about REST API calls, microservices, and backend management, there were a lot of aspects I had trouble understanding and even more topics I wanted to dive deeper into. However, I had trouble nailing down a specific aspect of the entire backend web ecosystem and its inner workings. I realized I was struggling to understand the overall architecture of a website and how each piece connects to the others. Enter: “Backend web development – a complete overview”, a brief, high-level, but informative and easy-to-understand snapshot of how a website processes information, the software it requires, and how it all connects.

In the video, created by YouTuber SuperSimpleDev, he provides a comprehensive introduction to the concepts behind the server side of web applications. I was able to understand that a website’s backend is described as a request-response cycle, where a user sends a message or “request” to a server and receives a response back. This communication is managed by an API, which helps to outline the types of requests a user is allowed to make. Much like in class, the video touched upon the use of REST as a common naming convention and standard for APIs, where specific request types match actions, like POST for creating or GET for retrieving data.
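
To see that request-response cycle in its smallest form, here is a toy Java sketch of my own using the JDK’s built-in HttpServer (the endpoint and message are invented, not from the video): a GET request to /greeting comes in, and the server writes a JSON response back.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyBackend {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // The "API" here is a single endpoint: GET /greeting retrieves data.
        server.createContext("/greeting", exchange -> {
            byte[] body = "{\"message\": \"hello from the backend\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);  // the response half of the cycle
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Listening on http://localhost:8080/greeting");
    }
}
```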

When it comes to the actual infrastructure, the video notes that modern backend development relies heavily on cloud computing, where companies rent virtual machines from providers like AWS or Google Cloud instead of owning physical servers. This can be managed manually as Infrastructure as a Service (IaaS) or through Platform as a Service (PaaS) tools that manage infrastructure automatically. As apps get bigger, developers may then design a network of microservices to split a large backend into smaller, specialized services or utilize Software as a Service (SaaS) for specific functions. 

This video was very helpful when it came to expanding on many different topics within backend web services. For technologies like cloud computing, it helped me understand the usefulness of these services and how they benefit web developers when it comes to managing many databases and servers. AWS specifically has been something I have been considering looking into on my own self-learning journey as I aim to become a web developer. While I would like my focus to be more centered on front-end development, I would like to be well versed in the technologies and design of all aspects of web development. This would help make me more well rounded and help me create better sites that appeal to both employers and potential clients.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

FOSS -> HFOSS

https://timreview.ca/article/399

Free and open source software (FOSS) has been a focus of our curriculum this semester. Our introduction to working with and managing the Thea’s Food Pantry software has provided us with experience contributing to FOSS as well as understanding the benefits of these types of programs. The article Humanitarian Free and Open Source Software by Chamindra de Silva dives into an important subsection of FOSS, one that focuses on humanitarian efforts and how it can benefit those in need or those suffering from a disaster event; it is otherwise described as “software engineers without borders”. A key aspect of this type of software – and the reason I chose to highlight this article – is that it inspires collaboration from others in the community, with the goal of making the program better, evolving it, and offering it as a tool for others to use and benefit from, thereby upholding the key principles of FOSS.

The article explores technology’s role in humanitarian efforts: creating software to help address rapid-onset natural disasters, slow-onset natural disasters, and human-instigated disasters. Many of these models support efforts to manage large-scale data problems that would otherwise overwhelm traditional information-gathering methods. Humanitarian goals and values align with much of FOSS’s mission: to be available to the public to serve a common good, to rely on the support of those in a community to help others in that community, and to lower costs in order to leave more funds for aid.

However, beyond their similarities, HFOSS requires more attention to certain “best practices” due to the critical nature of the communities it serves. It demands that a product be high quality and relatively stable, because system failures can result in information loss that affects life-saving efforts. The software is required to be highly intuitive with an easy learning curve, while also being resilient to many types of political, economic, and ecological environments.

Humanitarian free and open source software has been of interest to me since our initial introduction to it in class. Although I have been aware of and have utilized FOSS implementations, I was unaware of this specific movement. The idea that I can unite people in our world by helping them during times of crisis through software development and my career in tech appeals to me greatly. Knowing that I could potentially help create software that connects people with lost loved ones (Sahana), or host maps of previously unmapped areas to help first responders (OpenStreetMap), helps me open my mind to the ever-growing opportunities afforded to me in my new career. It also gets me thinking about ways to create and tie in an HFOSS project of my own that relates to my current career in Veterinary Medicine.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Blog Post for Quarter 3

November 16th, 2025

Considering how clean code was recently on my mind (mostly due to a recent assignment), I thought it might be interesting to look into. I ended up finding out that there are various style guides (well, a comic from an assignment also showed me that) and this blog shows some examples. It is nice as a basic overview of some style guides. It most notably covers three styles that revolve around whitespace usage and brace placement. It complements what I was learning in class, as the class typically focused on things like function and variable names, reducing the number of nested loops, and avoiding too many parameters. The blog goes over how some styles place braces in a way that makes it easier to parse through things like heavily nested loops. Both end up describing ways to increase clarity, just in different ways, which is super fascinating.

Interestingly, this led me to look at my own code. My current code uses the “Allman” style of braces. This was taught to me back in AP Computer Science in high school. I vaguely remember “Allman” being discussed in class, but I never made the connection between it and how I typed code. I ended up doing it since it was the easiest to read. Well, that and the professor told me that this was the “proper” way to code. (I vaguely remember the editor I coded in also did it for me automatically.) Since then, I’d sometimes just go around putting braces in that format. (Perhaps I’m worse off due to this since I overly rely on one style. I’ll need to learn to read other styles soon.)
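
For anyone who hasn’t seen the styles side by side, here is a quick Java comparison I put together (not from the blog itself) of Allman braces versus the K&R-style braces many other guides prefer:

```java
public class BraceStyles {

    // Allman style: the brace gets its own line, so blocks are easy to spot.
    static int sumAllman(int[] numbers)
    {
        int total = 0;
        for (int n : numbers)
        {
            total += n;
        }
        return total;
    }

    // K&R style: the brace stays on the same line, saving vertical space.
    static int sumKandR(int[] numbers) {
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] numbers = {1, 2, 3};
        System.out.println(sumAllman(numbers) + " == " + sumKandR(numbers));
    }
}
```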

Overall, I’ve mainly noticed my code is super, super bad. Like, overly bad. So next time I code something, I should do many different things. This blog I picked and the assignments on “clean code” would be a good start honestly. (Unfortunately, as someone who “thinks by doing,” I had a habit of creating copious amounts of technical debt without even realizing it. Then again, I was always the type to prefer redoing things entirely as opposed to looking for minor improvements….) Maybe one day my code will be actually good for once. And thankfully, I’ve got various places to start improving. There’s a long road ahead…. Well, at least that means that I have a lot of ways I can improve; the plateau isn’t here just yet. In fact, it is very far from where I am.

https://iannis.io/blog/the-ultimate-code-style/

From the blog CS@Worcester – Ryan's Blog Maybe. by Ryan N and used with permission of the author. All other rights reserved by the author.

CS343: The Dark Days of Design (and DDD)

This week I read a blog post from Robert Laszczak of Three Dots Labs titled “Software Dark Ages”. You can find that post here.

If the Dark Ages happened a long time ago, why are people still using Eclipse?

What was the article about?

Laszczak recounts his experience working for a company heavily laden with pretty much every kind of organizational, institutional, architectural, and technical flaw or burden out there. In his retelling, even something as simple as changing the functionality of a button would take months because of extremely high code coupling–among myriad other problems.

The way he solved this problem was to use DDD, or domain-driven design. DDD, like the similarly named test-driven development (TDD), places its focus on particular problem domains and builds up around them. In TDD, that focus would be our tests and test coverage, against which we would then build up our code. In DDD, we start with the problem and come up with a solution pattern that perfectly encapsulates the problem and its who/what/why/whatever.
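
As a loose illustration of what starting from the problem can look like in code, here is a hypothetical Java sketch of mine (not from Laszczak’s post): the domain’s own vocabulary and rules live inside the domain object itself, instead of being scattered across controllers and utility classes.

```java
import java.util.ArrayList;
import java.util.List;

// The domain's own vocabulary becomes the code's vocabulary.
public class Order {

    private final List<String> items = new ArrayList<>();
    private boolean shipped = false;

    // The business rule lives with the concept it belongs to.
    public void addItem(String item) {
        if (shipped) {
            throw new IllegalStateException("Cannot add items to an order that has shipped");
        }
        items.add(item);
    }

    public void markShipped() {
        shipped = true;
    }

    public List<String> items() {
        return List.copyOf(items);
    }

    public static void main(String[] args) {
        Order order = new Order();
        order.addItem("keyboard");
        order.markShipped();
        // order.addItem("mouse"); // would throw: the domain enforces its own invariant
        System.out.println(order.items());
    }
}
```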

If this sounds deceptively intuitive, that’s because it probably is. As Laszczak points out, DDD and problem-first approaches like it have existed for almost two dozen years at this point (Eric Evans’ DDD book dates to 2003). The reason they’ve fallen into obscurity is that people have forgotten the enlightened ideas of antiquity, like they’re living in the Dark Ages.

In [the] Software Dark Ages, infrastructure is putting down important software design techniques.

The problem is, Laszczak bemoans, that the modern software DevOps team is more concerned with achieving technical marvels (e.g. getting that Atlassian Certified t-shirt) than spending their brainpower on something as simple as considering the wheel. Technology is great, except why are we using it? What purpose does it have, not only for us, but for our customers?

The reason we engage with structural criticism is to gain a better understanding of the system at play. Look at 432 Park Ave in NYC and you might see what a poor grasp of the domain does to a project, and what happens when engineers are aware of the customer and their problems but choose to ignore both because… it was easier?

432 Park Ave: A beautiful marble-like white concrete exterior dominates the New York City skyline…
…except for the bughole voids and the hilariously slapdash Blu-Tak patch-and-fill job. A beautiful mess.

Why does DDD matter & why did I choose this post?

We have many frameworks for tackling software problems and organizing work, but very infrequently do we stop to consider the intricacies of the problem and how we can shape our project to meet the problem’s needs (inputs) rather than our organization’s abilities (outputs). Having high productivity and a strong team culture is important, but what does that mean if our codebase is in terrible shape and our customer’s final product isn’t right-sized to their needs?

In our class, we’ve begun to work with Thea’s Pantry, and one of the things we’ve touched upon time after time is that the state of the system is specifically tailored to the needs of the pantry. Not all pantries–not a lot of pantries–the code we are looking at is the way it is because someone or something at Thea’s Pantry needed it to be that way.

That’s what thinking in terms of DDD yields us.

What not thinking about DDD yields:

  • 432 Park Ave
  • The Torment Nexus (NB DevOps: not an Atlassian product, don’t try it!!!)

What I think: DDD is a mouthful.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

OpenAPI and Cloud POS

I came across an article from Oracle that talks about the benefits of a cloud POS system that uses an OpenAPI backend. It starts by highlighting the importance of having a cloud POS system in a restaurant. The biggest benefits are its responsiveness, scalability, and security. It allows businesses to add and drop menu items with ease and to prioritize orders in the kitchen, and it comes with built-in analytics to help them make decisions. Oracle explains how Simphony, their cloud POS system, has local storage that allows the system to continue working through internet outages.

Next, they compare OpenAPI to private APIs. The main difference is that OpenAPI allows developers to connect the software more seamlessly, with fewer restrictions. A private API, on the other hand, is only really available to internal developers, making it less user friendly. They then talk about how Simphony is built on OpenAPI architecture, making it easier for the customer to interact with and to get data and feedback more quickly. This makes for a more trusting relationship between partner and client. They make the point that it provides both scalability and security. OpenAPI is flexible and responsive, giving businesses that choose a POS system with open API architecture an edge. In restaurants specifically, it allows the front of the restaurant to connect with the kitchen more easily.

The reason this was important for me to read and write about is that we are spending a lot of time in my CS343 class discussing OpenAPI. It isn’t the easiest thing to grasp, but the more we learn about it, the more interested in it I am. This was especially interesting to me because Oracle is a very big tech company, and I was curious to see how they marketed their product that uses OpenAPI architecture. As we are learning about OpenAPI systems through real-world applications, it is always interesting to see how these things are applied in a real work environment. A POS system is a very appropriate use for OpenAPI. I enjoyed learning about the different types of products that Oracle offers using OpenAPI architecture. It was also cool to see how a company like Oracle uses OpenAPI and encourages their customers to use their products with it as well. It is reassuring to see how widespread the use of OpenAPI is. This encourages me to learn more about it in the future and to make sure I am well versed in this subject matter.

https://www.oracle.com/apac/food-beverage/restaurant-pos-systems/simphony-pos/why-choose-cloud-pos-system-open-api/

From the blog CS@Worcester – Auger CS by Joseph Auger and used with permission of the author. All other rights reserved by the author.

Better your Coding Standards

For this quarter’s blog, I decided to write about coding standards and chose to read Why You Need Coding Standards by David Mytton. To start off, Mytton talks about what “proper” code really means. It is not just any code that works, but rather code that is easy to fix, add to, and update in the future. His blog mentions a common problem that we face today: many developers start learning to code with their own style, which only they might understand. Once a project grows and other people join in, different styles collide and make the code confusing. Mytton explains the frustration of working with messy code and how stressful it is to scroll through long, unclear code with confusing variable names, missing comments, and no spacing.

Two main examples were used to show the difference that coding standards make. Both examples were written in PHP; the first example was messy. There was no spacing, variable names were unclear, there were no braces around control structures, and long conditions were missing parentheses. Although it technically worked, it was definitely hard to read at first. The second example showed the same logic, but this time with clean formatting, clear variable names, proper indentation, and correctly placed braces. I’ve never used PHP, but it was obvious that the second example was much easier to understand.
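
Since I can’t reproduce Mytton’s PHP here, below is a rough Java analogue I wrote of the same before-and-after idea (the function and names are my own invention, not from his post):

```java
public class StandardsDemo {

    // Messy: no spacing, cryptic names, no braces. It runs, but it's hard to read.
    static int f(int a,int b,int c){int r=0;if(a>b&&a>c)r=a;else if(b>c)r=b;else r=c;return r;}

    // Clean: the same logic with clear names, spacing, and braces on every branch.
    static int largestOfThree(int first, int second, int third) {
        int largest = third;
        if (first > second && first > third) {
            largest = first;
        } else if (second > third) {
            largest = second;
        }
        return largest;
    }

    public static void main(String[] args) {
        System.out.println(f(3, 9, 5) + " == " + largestOfThree(3, 9, 5));
    }
}
```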

I selected this blog because recently in class we’ve been working on clean coding, and I’ve realized how hard it can be to understand code written in different ways. I also wanted to further my understanding of how professionals handle this problem and how they keep projects organized. I think this blog was able to explain that point in a simple and relatable way for me.

From this blog, I learned that coding standards go beyond just “good habits.” They prevent confusion, reduce mistakes, and allow projects to run more smoothly. I also learned that a coding standard is not something that just one person creates and uses for themselves, but rather something adopted by the whole team. I hope that I am able to fix my poor coding habits and write code that allows others to collaborate on my work. Before this, I didn’t care much whether somebody would be able to read my code; if it ran, I was satisfied. My mindset has changed, and I want to be more consistent about following coding standards.

From the blog CS@Worcester – wdo by wdo and used with permission of the author. All other rights reserved by the author.