Category Archives: CS-343

Express.js and its association with REST APIs

For this third quarter’s professional development, I read the Treblle post, “How to Structure an Express.js REST API with Best Practices” (https://treblle.com/blog/egergr). The article focuses on how to organize and build RESTful APIs with Express.js while keeping the code clean, scalable, and modular. It covers key principles like separating app and server logic, implementing a layered architecture, structuring routes and controllers clearly, and designing the API so that it is easy to maintain and expand. It also emphasizes the importance of using Docker for containerization and environment consistency, which is essential for deploying APIs reliably across different systems.

I picked this resource because it ties closely with what we’ve learned in class so far, particularly modularity, separation of concerns, and maintainable software design. It helps that our class exercises have involved building small web services and understanding how system components interact (at least tangentially, with the guestinfobackend work), so reading about how Express.js projects are structured in real-world settings gave me a lot of context. For someone like me who is interested in backend or full-stack development, it’s the kind of practical foundation that helps turn classroom principles into actual coding habits.

From the article, I learned that Express.js isn’t just about setting up endpoints quickly; it’s about creating a clear, layered structure where each part of the API has its own responsibility. The article recommends dividing the project into three layers: a web layer (for routes and middleware), a service layer (for business logic and validation), and a data layer (for managing database access). This structure keeps the code modular and easier to debug. Another useful reminder was to containerize the API with Docker, which standardizes development and production environments so you can avoid those “it works on my machine” problems.
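To make the three-layer split concrete, here is a minimal TypeScript/Express sketch of the idea; the guest-related names and the in-memory “database” are my own hypothetical illustration, not code from the article:

import express, { Request, Response } from "express";

// Data layer: the only code that knows how data is stored (stubbed in memory here).
const guestRepository = {
  findAll: async () => [{ id: 1, name: "Ada" }],
};

// Service layer: business logic and validation, with no HTTP details.
const guestService = {
  listGuests: async () => guestRepository.findAll(),
};

// Web layer: routes and middleware only; it just delegates to the service.
const app = express();
app.get("/guests", async (_req: Request, res: Response) => {
  res.json(await guestService.listGuests());
});

app.listen(3000);

Because each layer only talks to the one below it, swapping the stubbed repository for a real database later would not touch the route or the service.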

I’d say the article reinforces many of the software architecture concepts we’ve referenced in class, such as modularity, abstraction, and loose coupling. A modular API design definitely makes it easier to scale, test, and maintain which at the end of the day is really the heart of software construction. It also reminded me that tools like Docker play a key role in supporting architecture by making deployment consistent and repeatable, which is just as important as how the code itself is structured.

As a whole, I’d say this article helped me better understand what good backend architecture looks like in practice. It gave me a clearer sense of how to build modular, scalable APIs using Express.js and Docker, and I can see how those principles will carry over into future coursework and professional projects I might be part of.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

REST API Parameters and Filters

This past week in class, we were working on a homework assignment on REST APIs. In the first part of the homework, we had to create new endpoints for the inventory path. The part I struggled with was writing the query parameters. I was pretty confused and felt like I was diving headfirst into something I didn’t understand, so I found a site that explains the API parameter syntax to help.

For path parameters, the name of the parameter is the same as the one in the path.

For query parameters, the name is not in the path and can be anything. 

The body of the parameter definition is the same in both cases. It needs a name, a declaration of whether it’s a path or query parameter, whether it’s required, a description, and the format of the input.

After reading through the site, I realized I was over-complicating it, and all I had to do was use the same format as the already created parameter bodies and alter it to what I needed. 
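To illustrate the difference at runtime, here is a small Express-style TypeScript sketch (my own hypothetical example, not the OpenAPI spec from the homework): the path parameter’s name must match the placeholder in the path, while the query parameter lives only in the query string.

import express, { Request, Response } from "express";

const app = express();

// Example request: GET /inventory/42?verbose=true
app.get("/inventory/:id", (req: Request, res: Response) => {
  const id = req.params.id;          // path parameter: name matches ":id" in the path
  const verbose = req.query.verbose; // query parameter: not in the path, optional
  res.json({ id, verbose: verbose === "true" });
});

app.listen(3000);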

————————————————————————

The next part of the homework assignment was to use GET methods to filter for results. I did not end up completing this part of the assignment, but I was still curious about how it worked. I found a site that explains the ways you can filter results, like requiring an attribute to be equal to, less than, or greater than a value.

To filter for an attribute with a specific value, use this line: 

GET /path-name?attribute=value 

You can link filters with an &:

GET /path-name-1?attribute-1=value-1&attribute-2=value-2

Less than, less than or equal to, greater than, and greater than or equal to are achieved with the shorthands lt, lte, gt, and gte, respectively. Greater than would be shown like this:

GET /path-name?attribute_gt=number-value

The homework asks us to filter for guest age in the guests path, using equal to, less than, less than or equal to, greater than, and greater than or equal to. To solve this, I would use the GET method on the guests path with the appropriate query string, like:

GET /guests?age=40
GET /guests?age_gt=40
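For a sense of how a server might implement those filters, here is a hedged TypeScript/Express sketch with an in-memory guests list (my own illustration, not the actual homework backend):

import express, { Request, Response } from "express";

const app = express();
const guests = [
  { name: "Alice", age: 35 },
  { name: "Bob", age: 52 },
];

// Handles GET /guests?age=40, ?age_gt=40, ?age_gte=40, ?age_lt=40, and ?age_lte=40
app.get("/guests", (req: Request, res: Response) => {
  const { age, age_gt, age_gte, age_lt, age_lte } = req.query;
  let result = guests;

  if (age !== undefined) result = result.filter((g) => g.age === Number(age));
  if (age_gt !== undefined) result = result.filter((g) => g.age > Number(age_gt));
  if (age_gte !== undefined) result = result.filter((g) => g.age >= Number(age_gte));
  if (age_lt !== undefined) result = result.filter((g) => g.age < Number(age_lt));
  if (age_lte !== undefined) result = result.filter((g) => g.age <= Number(age_lte));

  res.json(result);
});

app.listen(3000);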

————————————————————————

Understanding the format and syntax of REST APIs will be very useful for the Software Development Capstone next semester. I now understand parameters, how to create schemas, and how to reference schemas and error codes, all of which are extremely useful for future projects and in a job setting. As we continue to learn how to use REST APIs and expand our knowledge, I feel comfortable adding REST API design and implementation to my skillset.

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

OOP: Encapsulation

I wrote my blog based on “What is encapsulation in object-oriented programming (OOP)?” from the Cincom Systems Blog. The blog defines encapsulation as bundling data (attributes) and the methods that operate on that data into a single unit, while restricting access to some of the class’s components. The author explains well what we learned in previous classes: that by using access modifiers like private and protected, a class can hide its internal state while exposing only controlled interfaces to the outside world. It goes on to describe the benefits of encapsulation: improved modularity, a protected internal state, easier debugging and testing, and cleaner interfaces. The article also gives good practices based on the idea of encapsulation.

This blog relates directly to much of what we do in this class, practicing OOP and clean coding. I picked this article because it not only defines the concept of encapsulation but also connects it with practical ideas in software design and maintenance. I felt it would be a good reminder of the theory behind encapsulation and OOP.

Reading this article reminded me that encapsulation isn’t simply making variables private and adding setters and getters; it’s about letting the class control its own state and hide implementation details from the outside world. The blog prompted me to review some of my past projects where I had too many exposed public fields, which made the code less clean. My past projects are definitely not written with encapsulation in mind, so reviewing this topic was extremely helpful.
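As a quick illustration of that idea, here is my own small TypeScript sketch (not from the article) of a class that controls its own state instead of exposing public fields:

// Hypothetical BankAccount: the balance is private, and the only way to change it
// is through methods that enforce the class's own rules.
class BankAccount {
  private balance = 0;

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("Deposit must be positive");
    this.balance += amount;
  }

  withdraw(amount: number): void {
    if (amount <= 0 || amount > this.balance) throw new Error("Invalid withdrawal");
    this.balance -= amount;
  }

  getBalance(): number {
    return this.balance; // read-only view of the internal state
  }
}

const account = new BankAccount();
account.deposit(100);
account.withdraw(40);
console.log(account.getBalance()); // 60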

Going forward, in personal coding and team projects, I expect to apply encapsulation and OOP in general by designing classes so that their internal data is private or protected and only the necessary operations are public, ensuring that external code interacts with objects through meaningful methods rather than manipulating their internals directly.

To summarize, this blog helped me remember what encapsulation actually means as a core OOP design principle rather than just an afterthought. Applying what I learned from it, I expect to write code that is cleaner and less error-prone.

https://www.cincom.com/blog/smalltalk/encapsulation-in-object-oriented-programming/

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

Learning More About REST API

I have been learning about REST APIs. One concept is the UUID, which stands for Universally Unique Identifier: a long, effectively unique value used to identify a resource such as a user without exposing personal details. REST APIs adhere to the constraints of the RESTful architecture. They have a base URL, a media type, and standard HTTP methods like GET, PUT, PATCH, POST, and DELETE. GET retrieves information about a collection or a specific member. POST creates a new entry in a collection, like making a post to add to the sea of posts online. PUT can replace a collection, replace a specific member, or change an aspect of a member. PATCH updates a certain member in a collection; think of sewing a patch onto one part of a blanket rather than over the whole thing. There are also status codes associated with the results of these requests. 200 means a request was executed successfully, and 201 means a POST successfully created an entry. For unsuccessful requests, 400 means the request itself has an error, 404 means the requested resource could not be found, and 500 means the server failed while handling the request. That is the bulk of what I know.
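To tie the methods and status codes together, here is a small hypothetical TypeScript/Express sketch of my own (the guests resource is just an example, not from any article):

import express, { Request, Response } from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

const guests: Record<string, { name: string }> = {};

// POST creates a new member and returns 201 with its UUID.
app.post("/guests", (req: Request, res: Response) => {
  const id = randomUUID();
  guests[id] = { name: req.body.name };
  res.status(201).json({ id });
});

// GET retrieves a specific member, or returns 404 if it does not exist.
app.get("/guests/:id", (req: Request, res: Response) => {
  const guest = guests[req.params.id];
  if (!guest) return res.status(404).json({ error: "Not found" });
  res.status(200).json(guest);
});

app.listen(3000);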

The main reason for the post is to learn more about REST APIs. I am using this website as my source: Debugging APIs Best Practices for Product Managers

This article seems to have more than the basics. Most of the other articles I found just go over what I already know, while this one goes over how to debug APIs. Debugging in general is a very important skill in any language; in fact, much of the time a developer spends on a project goes to debugging.

The main thing I learned was the set of steps for debugging an API. I am going to connect this with REST APIs since that is the kind of API I am familiar with. Step one is to identify the issue, which can be done using developer tools like Chrome Developer Tools. I believe this is for more complex work, since for simpler cases someone could just review the REST API itself. The second step is to check the status code of the error. The third step is to dig further depending on that status code. For example, if I get a 400 error, I could ask myself whether I misspelled something or whether my input is not properly structured. Think of step two as gathering information and step three as looking through it. The last step is experimentation: essentially, use problem-solving skills.
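As a rough sketch of steps two and three, gathering the status code and then branching on it, here is my own TypeScript example against a hypothetical local endpoint:

// Step two: gather information (the status code). Step three: narrow down the cause.
async function debugRequest(): Promise<void> {
  const response = await fetch("http://localhost:3000/guests?age_gt=40");

  if (response.ok) {
    console.log("Success:", await response.json());
  } else if (response.status === 400) {
    // Client-side problem: check spelling and the structure of the request.
    console.log("Bad request - inspect the URL and body:", await response.text());
  } else if (response.status === 404) {
    console.log("Not found - is the path correct?");
  } else if (response.status >= 500) {
    console.log("Server error - look at the server logs instead.");
  }
}

debugRequest();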

Overall, this article gave me a better understanding of REST APIs. I knew a few of the steps beforehand, so it was not completely new, but it was nice to learn that there are tools to identify problematic APIs. I will keep them in mind if I ever need them, maybe in a future web application development project.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Building Secure Web Applications


This week, I explored the topic of web application security, an increasingly serious field in software development. With applications becoming more interconnected and development becoming more data-driven, protecting user information and system integrity matters as much as functionality or performance. The subject relates to the course goals around system design, software quality, and secure coding practices.

During my research, I paid attention to the common weaknesses that programmers have to deal with, including cross-site scripting (XSS), SQL injection, and insecure authentication systems. Such weaknesses are usually caused by a failure to consider security requirements at the initial design phase. For example, failing to validate input correctly may allow attackers to inject malicious code or access sensitive information. Security by design is based on the idea that protection must be built in at every stage of development instead of treating security as an afterthought.

I also reviewed industry best practices for improving application security. Common attacks are prevented with techniques like parameterized queries, enforcing HTTPS, encrypting sensitive data, and using secure authentication frameworks. Regular code reviews, automated testing, and compliance with standards such as the OWASP Top Ten help developers build more robust systems. I also learned that a healthy security culture within a development team, where the whole team takes responsibility for securing its users’ data, is as valuable as any technical measure.
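As a small illustration of one of those techniques, here is a hedged TypeScript sketch of a parameterized query, assuming the widely used pg library for PostgreSQL (the users table and email column are hypothetical):

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Unsafe alternative (left commented out): concatenating user input into SQL invites injection.
// const result = await pool.query(`SELECT * FROM users WHERE email = '${email}'`);

// Safe: the value is passed as a parameter, so the driver handles escaping.
async function findUserByEmail(email: string) {
  const result = await pool.query("SELECT * FROM users WHERE email = $1", [email]);
  return result.rows[0];
}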

This subject matter echoed our classroom discussions on software reliability and maintainability. Secure code is like clean code in that it has to hold up over a long period of use. I was intrigued by the fact that the same design principles of clarity, simplicity, and modularity also make a system more secure: a well-organized system that is easy to audit has fewer places for undetected weaknesses to hide.

Reflection:

This study made me understand that building secure applications is not just a technical need but also a moral obligation. Developers should consider risks and user safety in advance. Security should not come at the expense of usability; rather, the two should complement each other to produce software that users can trust. This attitude has motivated me to adopt safe coding practices early in my work, including validating inputs, handling data carefully, and using sound frameworks.

In general, this exploration broadened my perspective on contemporary software design beyond performance and functionality alone. Security is more than ever a key component of quality software engineering. With these principles combined, I am more confident that I will be able to create applications that are efficient and scalable, as well as safe for users in an ever more digitized world.

Next Steps:

Next, I will try out some security-oriented tools such as penetration testing frameworks and automated vulnerability scanners. I will also read more of the OWASP guidelines to deepen my knowledge of emerging threats and mitigation techniques.

From the blog CS@Worcester – Site Title by Yousef Hassan and used with permission of the author. All other rights reserved by the author.

Continuous Integration and Its Role in Modern Software Development

As part of my work in CS-343: Software Construction, Design, and Architecture, I explored Martin Fowler’s article Continuous Integration: Improving Software Quality and Reducing Risk. The article explains how Continuous Integration (CI) has become a foundation of modern software development. I chose this topic because CI connects directly to what we learn in class about software design, testing, and teamwork, and I wanted to understand how professional engineers use it in real projects.

Why Continuous Integration Matters

Fowler defines CI as the practice of frequently merging small code changes into a shared repository, where each integration automatically triggers a build and a full suite of tests. This quick feedback loop helps developers detect and fix errors early, saving time and reducing costly bugs later. I found it interesting how CI transforms collaboration. Instead of waiting until the end of a sprint to integrate work, developers share updates several times a day, which keeps everyone aligned and encourages constant communication. This matches Agile values like transparency and adaptability.

How It Works in Real Projects

The article reminded me of how many real companies rely on CI tools such as GitHub Actions, Jenkins, or Travis CI. For example, open-source projects on GitHub often use automated pipelines that run tests every time someone submits a pull request. At larger companies like Netflix or Google, CI systems help maintain quality across thousands of code changes each day. These examples show that CI is not just a technical setup; it is a habit that promotes trust and shared responsibility among team members.

Challenges and Lessons Learned

Implementing CI is not always easy. Some teams struggle with broken builds, unreliable tests, or slow pipelines. Fowler points out that success depends on discipline and teamwork. Everyone must commit stable code, fix issues immediately, and write reliable automated tests. I learned that good communication is just as important as good tooling. Teams that treat CI as a shared value rather than a rule tend to build stronger collaboration and avoid finger-pointing when problems arise. Helpful resources such as the GitHub Actions Docs and Jenkins Website provide guidance for managing these challenges.

Key Takeaways

Continuous Integration stood out to me as more than just a process. It represents a mindset of accountability and openness. By integrating code regularly, teams reduce risks, deliver features faster, and maintain cleaner codebases. What I liked most about Fowler’s explanation was how he linked technical practices to human behavior, showing that consistency and trust are the real keys to quality software. Moving forward, I plan to apply these principles in my own projects by setting up automated builds and test workflows. CI will help me work more efficiently and confidently in any team environment.

From the blog CS@Worcester – Life of Chris by Christian Oboh and used with permission of the author. All other rights reserved by the author.

Software Frameworks and REST APIs: Building Scalable, Maintainable Systems

Hello everyone, and welcome to my blog entry for this quarter.

For this quarter’s self-directed professional development, I selected the article “What frameworks are commonly used by REST API developers?” by MoldStud (moldstud.com), which surveys popular frameworks for building REST APIs and outlines why they matter.
Because our classes have covered software architecture, design patterns, and object-oriented design, I wanted to explore how frameworks help bring those concepts into real projects, especially when implementing REST APIs.

Summary of the Article

The article begins by explaining that when developers build REST APIs, choosing the right framework is critical. It reviews several top frameworks, such as:

  • Express.js (for Node.js): praised for its simplicity, flexibility, and modular middleware system.
  • Spring Boot (Java): known for its strong ecosystem (Spring Data, Spring Security, etc.) and its ability to rapidly build production-ready REST APIs.
  • Python frameworks such as FastAPI and Flask, which also allow building RESTful services with less boilerplate and good developer productivity.

The article emphasizes that frameworks provide built-in features like routing, serialization, input validation, authentication/authorization, and documentation support, which means developers can focus more on business logic rather than boilerplate.
It also notes that frameworks differ in their trade-offs (simplicity vs. features, performance vs. flexibility), so the choice depends on project size, team skill, performance expectations, and ecosystem.
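As one example of that boilerplate being absorbed, here is a hedged TypeScript sketch of Express-style routing with a tiny validation middleware (the route and validation rule are my own hypothetical illustration, not from the article):

import express, { Request, Response, NextFunction } from "express";

const app = express();
app.use(express.json()); // built-in request body parsing

// Validation kept in middleware, out of the business logic.
function requireName(req: Request, res: Response, next: NextFunction) {
  if (typeof req.body.name !== "string" || req.body.name.length === 0) {
    return res.status(400).json({ error: "name is required" });
  }
  next();
}

app.post("/items", requireName, (req: Request, res: Response) => {
  res.status(201).json({ created: req.body.name });
});

app.listen(3000);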

Why I Selected This Resource

I chose this article because, in our coursework and my professional development (including my internship at The Hanover Insurance Group), I have seen frameworks play a key role in making software more maintainable and scalable. Given that we have covered design principles and object-oriented design, understanding how frameworks support those principles (and how REST APIs fit into that) felt like a natural extension of our learning. I wanted a resource that bridges theory (design, architecture) with practice (framework usage, API development), and this article did just that.

Personal Reflections: What I Learned and Connections to Class

Several thoughts stood out for me:

  • Frameworks help enforce design discipline. For example, while in class we’ve talked about abstraction, encapsulation, and modular design, using a framework like Spring Boot means that the structure (controllers, services, repositories) often mirrors those concepts. The separation of concerns is built in.
  • When building a REST API, using a framework means you benefit from standard patterns (e.g., routing endpoints, serializing objects, handling errors) so you can spend more time thinking about how your code relates to design principles, not reinventing infrastructure.
  • I’ve seen in projects (including my internship) how choosing a framework that aligns with the team’s language, domain, and architecture reduces friction. For instance, if you need to scale to many services, choose a framework that supports microservices or lightweight deployments. The article’s discussion about trade-offs reminded me of that.
  • One connection to our class: We’ve drawn UML diagrams to model systems, show how classes relate, and plan modules. Framework usage is like the next step: once the design is set, frameworks implement those modules, enforce contracts, and provide the infrastructure. In particular, when those modules expose REST APIs, the design decisions we make (interface boundaries, class responsibilities) reflect directly in how endpoints are designed.
  • It also made me reflect on how REST APIs themselves are more than just endpoints; they represent system architecture, and frameworks help in realizing that architecture. For example, using a framework that supports versioning, middleware, and layered architecture helps keep the API maintainable as it evolves.

Application to Future Practice

Going forward, I plan to apply these lessons in both academic and professional work:

  • When building a project (in class or internship) that uses REST APIs, I’ll choose a framework early in the design phase and consider how the framework’s structure maps to my design model (classes, modules, responsibilities).
  • I’ll evaluate trade-offs consciously: If I need speed and simplicity, maybe a lightweight framework; if I need enterprise features (security, data access, microservices), maybe a full-featured one like Spring Boot.
  • I’ll use the framework’s features (routing, validation, middleware) to enforce design principles like modularity, readability, and maintainability rather than writing everything by hand.
  • From the API perspective, I’ll ensure that endpoint design aligns with our design models: models reflecting resources, controllers respecting single responsibility, services encapsulating business logic, all supported by the framework.
  • Finally, I’ll treat the framework as part of the architecture, not just a tool, meaning I’ll reflect on how framework conventions influence design decisions, and how my design decisions influence framework usage.

Citation / Link
Crudu, Vasile & MoldStud Research Team. “What frameworks are commonly used by REST API developers?” MoldStud, October 30 2024. Available online: https://moldstud.com/articles/p-what-frameworks-are-commonly-used-by-rest-api-developers

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Software Frameworks: an Introduction

For this post I listened to the Sourcetoad podcast called, Leveraging Frameworks for Your Software Development Project. This podcast features three software developers who work for Sourcetoad, a software consulting and development firm, by which they discuss software frameworks. https://www.youtube.com/watch?v=ik4d2Jf7Rik&t=1539s

To begin with frameworks, we must define what exactly a framework is. A framework is a collection of pre-built code that forms the blueprint of an app, without you needing to write that code yourself. Frameworks are tremendously useful because with any application you are trying to solve a problem, and to start any project there is a kind of “price of entry”: login, authentication, security, a database, and a server, to name a few. A framework gives you these things right off the bat as pre-built modules, which lets you start on the actual solution faster.

Now, the best part about frameworks is that the vast majority are free, in the sense that the code is open source: software made by the software community for the software community. Anyone can view, edit, and modify it.

So when would you not want to use a framework? Say you have a simple application built on a framework, but when you run it you notice it renders more slowly than expected. This is because with any framework you get a lot of pre-built code, much of which you might not use, and that can slow down rendering. As the podcast wonderfully puts it, “the great thing about a framework you get a lot of stuff, but you also get a lot of stuff.”

You can build on top of frameworks, and one popular way of doing this is by using a CMS, or Content Management System (frontend and backend). A CMS enables users to manage the content on a website themselves, without needing to know how to code; it gives non-technical people the ability to make changes to the website instantly. A con is vendor lock-in, meaning the site cannot be transitioned to another platform easily.

There is also the headless CMS. It is responsible for editing and storing the content, but not for how the content is presented visually; it has no frontend, only a backend. Some pros of a headless CMS are that content is easier to manage, developers get more freedom to develop code at scale, and content can be created once and published everywhere.

Overall, I’ve heard the word “framework” tossed around in the computer science world, but I never truly had a grasp on what it really was. After listening to this podcast, I feel good about what it is and eager to start a project using a framework, and even more so to explore the world of CMS and headless CMS once I feel comfortable with frameworks.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.

A look at Refactoring

Hello! For my second quarter blog, I read a blog post written by Yung Han Jeong, titled “Spaghetti Deconstructed: Lessons from my first refactoring“. As its name suggests, the post covers Yung’s personal experiences and advice pertaining to refactoring. For those who don’t know, refactoring is essentially improving existing code in a way that doesn’t affect its functionality. This can be as simple as changing variable names, all the way to completely restructuring the program. Refactoring is an integral part of a very large portion of what we will be doing in class this semester. I would say at this point I am pretty comfortable with the topic; however, I wanted some sort of anecdotal, first-hand account of someone’s actual experiences with it, since everything we have been doing has been in a classroom setting.

Yung’s blog recounts her experience refactoring some of the earliest code she wrote as an entry-level developer, namely her horror at how bad it used to be. It got her thinking about what she could have done to improve her code, which inspired her to blog about the changes she thinks would make the biggest difference (she gives four examples which she calls “pasta”, “sauce”, “meatballs”, and “cheese”; I don’t think I need to explain that). Firstly (pasta), she talks about the importance of descriptive variable names. She argues that while it is tempting to use simple names that you don’t see the need to explain because you are familiar with the code, it is always worth the extra effort to make them more descriptive, to comment an explanation for them, or both. Next (sauce), she hammers home the importance of commenting your code thoroughly. It’s something all CS students have been pestered about endlessly, but it is one of the single most important things you can do to improve your code: being able to quickly understand what a method or class does saves so much time in the long run, far outweighing the extra time spent writing the comment. Her third point (meatballs) ties into this: she recommends keeping most if not all debugging statements, arguing that once they have served their purpose they can simply be commented out and referenced in the future. Lastly (cheese), she emphasizes the importance of revisiting code “soon and often”.

Admittedly, the advice Yung gives is pretty rudimentary. When I found this blog I thought it would talk about refactoring in the way we have in class, where we focus more on the structure side of things. However, reading this made me realize that this is very much refactoring as well. Sometimes the best thing you can do with your code is improve on the simple things, like naming schemes and comments, something Yung, an actual software dev, seems to find important enough to write a blog about. I am happy I found this blog; while I didn’t exactly learn anything ground-breaking, I realized that when refactoring, sometimes improving on the simple things is the best course of action to take. 

From the blog Joshua's Blog by Joshua D. and used with permission of the author. All other rights reserved by the author.
