Category Archives: CS-343

The art of REST API design

Something I have personally never worked on as a developer is a REST API. AWS is a pioneering force in the sphere of web development, so there was no better place to read up on this subject. Going through their documentation, you can really see why REST APIs are so vital to the modern web.

An overview of REST API design principles, methods, and benefits, illustrating key concepts for web development.

What is an API?

API stands for Application Programming Interface; it defines the rules you must follow to communicate with other software systems.

An API is a gateway between:

  • Clients: Users who want to access information from the web
  • Resources: The information that different applications provide to their clients.

What is REST?

REST stands for Representational State Transfer; it is a software architecture that imposes conditions on how an API should work. It was originally created as a guideline for managing communication on complex networks. An API built to this architecture is a RESTful API.

Here are some of the principles of the REST architectural style (a short sketch after the list illustrates statelessness):

  • Uniform interface
    • Indicates the server transfers information in a standard format
    • Four architectural constraints:
      • Requests should identify resources
      • Clients have enough info in the resource representation to modify or delete the resource if wanted
      • Clients receive info about how to process the representation further
      • Clients receive info about all other related resources they need to complete a task.
  • Statelessness
    • A communication method in which the server completes every client request independently of all previous requests
  • Layered System
    • The client can connect to authorized intermediaries between itself and the server and still receive responses from the server
  • Cache-ability
    • Able to store some responses on the client or on an intermediary to improve response time
  • Code on demand
    • Servers can temporarily extend or customize client functionality by transferring programming code to the client.
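To make statelessness a bit more concrete, here is a minimal Python sketch of my own (not from the AWS page), assuming a hypothetical https://api.example.com service and token: every request carries all the context the server needs, so no session state has to live on the server between calls.

```python
import requests  # third-party HTTP client library

API_BASE = "https://api.example.com"   # hypothetical API base URL
TOKEN = "my-access-token"              # hypothetical credential

def get_order(order_id: str) -> dict:
    """Fetch one order. Each call carries its own auth header,
    so the server never has to remember anything between requests."""
    response = requests.get(
        f"{API_BASE}/orders/{order_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},  # full context in every request
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```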

Benefits of REST APIs:

Here are a few of the benefits AWS highlights:

  • Scalability
  • Flexibility
  • Independence

How it works:

The basic functions are similar to browsing the internet; here are the general steps of any REST API call:

  1. The client sends a request to the server
  2. The server authenticates the client and confirms it has permission to make the request
  3. The server receives the request and processes it internally
  4. The server returns a response to the client.

The client request contains these main components (tied together in the sketch after this list):

  • Unique resource identifier
  • Method
    • GET
      • Access resources at URL
    • POST
      • Send data to server
    • PUT
      • Update existing resources on the server
    • DELETE
      • Request to remove resource
  • HTTP headers
    • Data
    • Parameters
      • Path: Specifies URL details
      • Query: Requests more info about resource
      • Cookie: Provides authentication
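Here is a minimal sketch of my own (again using the hypothetical https://api.example.com service, not an example from AWS) showing how the resource identifier, method, data, headers, and parameters come together in actual calls with Python's requests library:

```python
import requests  # HTTP client library

BASE = "https://api.example.com"  # hypothetical base of the unique resource identifiers

# GET: access a resource, with a query parameter asking for a filtered view
books = requests.get(f"{BASE}/books", params={"author": "Austen"}, timeout=10)

# POST: send data to the server to create a new resource
created = requests.post(
    f"{BASE}/books",
    json={"title": "Emma", "author": "Austen"},  # request data (body)
    timeout=10,
)

# PUT: update an existing resource at a specific path
requests.put(f"{BASE}/books/42", json={"title": "Emma (2nd ed.)"}, timeout=10)

# DELETE: request that the server remove the resource
requests.delete(f"{BASE}/books/42", timeout=10)
```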

Final Thoughts:

My understanding of REST APIs was very limited, but leave it to the best in the business to have all the information necessary to learn. Implementing one would definitely take time, but having this basic understanding is an important foundation. If you want to learn more yourself, visit this page on AWS.

From the blog Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.

Software Frameworks

One day we played with “Hello world”, a couple of arrays, and some text input from the user, and the next, we messed with a small terminal-based game and ran an algorithm on some data. Up to this point, much, if not all, of the software most students have produced has been a relatively small, fairly local program, communicating with no more than a few other files within the same application, or with a tool like Docker.

When it comes to building applications with feature-sets that a business needs to operate smoothly, and ultimately make money, the scale and complexity of the applications being developed grows rapidly. Developing software to fully satisfy business requirements can be a very complicated and tedious process, and that is just the reality of engineering software.

In light of the challenges of efficiently and reliably producing software that works for a business’ use-case, developers have come up with many ways to streamline the process. One of the most popular ways to do this is to utilize a “framework”. To gain a better understanding of what a framework is in software engineering, I read through a blog post on Contentful, linked here:

https://www.contentful.com/blog/what-is-a-framework/

A programming framework is essentially a pre-built skeleton for an application. There are many different types of frameworks, for the frontend, the backend, servers, and mobile, and whichever it is, the goal is ultimately the same: almost every application ever made needs these same core functionalities, so they are bundled together for us to extend with our specific business logic.
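To make the “pre-built skeleton” idea concrete, here is a tiny sketch using Flask as an example framework (my own illustration; the Contentful post doesn’t single out Flask): the framework already provides the HTTP server, routing, and response handling, and the only code we write is our business logic.

```python
from flask import Flask, jsonify

app = Flask(__name__)  # the framework's skeleton: server, routing, request handling

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    # Our business logic is the only part we actually write ourselves.
    return jsonify({"id": order_id, "status": "shipped"})

if __name__ == "__main__":
    app.run()  # Flask runs the HTTP server loop for us
```

Django, Rails, or Angular do the same thing at a larger scale: the skeleton is already there, and we fill in the application-specific parts.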

Naturally, having a ton of parts of your application already put together and ready to be extended comes with major advantages, and unsurprisingly, a couple potential disadvantages.

The most significant upsides of frameworks come in the form of faster development, fewer accidental set-up errors, built-in security, and a common baseline for applications, so all developers know, for example, that we’re working with the Django framework. All of these benefits are massive for both developers and businesses.

The downside of a framework is that it’s already put together a particular way, and you probably shouldn’t change that skeleton, leaving you with less freedom to develop the application exactly as you might like. It also requires onboarding developers into that framework: even if someone has worked with Python before, using the Django framework requires learning implementation methods and best practices specific to Django itself, not just Python.

I’ve enjoyed reading about software frameworks, as even though I’d heard of Angular, or Ruby on Rails, I didn’t fully understand what exactly was so important about these frameworks, and why so many of them were present on job postings everywhere. After reading about them, the answer couldn’t be more obvious. Professional developers will utilize these tools to improve and streamline their workflows, and that’s that!

From the blog CS@Worcester – KeepOnComputing by CoffeeLegend and used with permission of the author. All other rights reserved by the author.

Understanding REST API

Hello everyone

This week’s blog will focus on REST APIs, a topic we’ve been working with heavily in class and one that, admittedly, has left me in awe because I have realized how useful it is. Writing these blogs has actually been helping me a lot in understanding the topics we go over in class at a deeper level. Some new topics can be overwhelming at first, but writing the blogs allows me to break down the complex concepts and understand them more easily. While looking for different blogs that went over this topic, I found one that really stood out and grabbed my attention.

The author begins the blog with a clear definition of what a REST API is, which helps the reader ease into the topic. I like how he states the definition of REST API and then later rewords that definition in simpler terms. This keeps readers who are new to REST APIs from getting overwhelmed or scared off by technical words. This is something that I personally value a lot and appreciate when authors do for us readers. He then continues by mentioning the six principles of REST and follows up by explaining each one. For each principle he does an amazing job at explaining the core idea behind it without overwhelming the reader. He keeps it simple, not too long, and follows up with great examples, making it simpler to understand each concept. I can’t explain every principle here, but I will write a single sentence for each of them and what helped me the most in understanding them.

Let’s first start by listing the six principles of REST, which are: Uniform Interface, Client-Server, Stateless, Cacheable, Layered System, and Code on Demand. The uniform interface constraint represents the idea that all components in a system should follow a general, consistent method of communication. The client-server principle emphasizes separating interface concerns from data storage so apps can be more portable and servers simpler. Statelessness means each request from a client must contain all the information needed to be understood—nothing is stored between requests. Cacheability allows clients to reuse certain response data to improve performance. A layered system restricts each component to interacting only with the layer directly beneath it, helping maintain structure and scalability. Finally, code-on-demand allows servers to extend a client’s functionality by sending executable code; this one is optional, as the author mentions, but it was still a nice touch to include it.

In conclusion, this blog was very informative and did an amazing job at teaching me more about REST APIs and helped me see their purpose, which is to make applications simpler, more scalable, and easier to maintain—for both clients and developers. The more you get into software programming, the more you appreciate things like this, and now I can even understand them, which is the best part of learning!

Source: https://restfulapi.net/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

A SIMPLE UNDERSTANDING OF SOLID PRINCIPLES.

When I first started writing object-oriented code, I struggled with messy classes, confusing logic, and unexpected bugs from the smallest changes. It felt like no matter how hard I tried to stay organized, something always broke. Learning the SOLID principles completely transformed the way I write code. These five guidelines helped me simplify my projects, make them easier to extend, and create code that finally made sense. If you’re just starting out, I hope this breakdown helps you the way it helped me.

1. Single Responsibility Principle (SRP)

A class should only have one job. That’s it.
When one class tries to do everything (handle data, print reports, manage files, and validate input), it becomes fragile. Changing one responsibility risks breaking another. I used to write huge classes thinking it would “keep things together,” but it only created chaos. Once I started separating responsibilities into smaller classes, everything became easier to understand and debug.
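Here is a small sketch of that kind of split in Python (my own example, not from the freeCodeCamp article): one class only holds and formats the report, another only saves it, so a change to file handling can never break the formatting logic.

```python
class Report:
    """Single responsibility: hold and format report content."""
    def __init__(self, title: str, body: str):
        self.title = title
        self.body = body

    def render(self) -> str:
        return f"{self.title}\n{self.body}"


class ReportSaver:
    """Single responsibility: persist a rendered report to disk."""
    def save(self, report: Report, path: str) -> None:
        with open(path, "w") as f:
            f.write(report.render())
```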

2. Open–Closed Principle (OCP)

Your code should be open for extension but closed for modification.
This principle protects working code from unnecessary edits. Instead of constantly changing old methods, you extend behavior through new classes or strategies. It’s like adding a new room to a house without tearing down the entire structure. OCP helped me stop rewriting code that already worked and start building on top of it safely.
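A minimal sketch of what “extending instead of modifying” can look like, using a simple discount example of my own:

```python
from abc import ABC, abstractmethod

class Discount(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(Discount):
    def apply(self, price: float) -> float:
        return price

class HolidayDiscount(Discount):
    def apply(self, price: float) -> float:
        return price * 0.9  # 10% off

def checkout(price: float, discount: Discount) -> float:
    # This function never changes; adding a new discount means adding a new class.
    return discount.apply(price)
```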

3. Liskov Substitution Principle (LSP)

Child classes should be usable anywhere the parent class is expected without breaking the program.
This matters when using inheritance. If a subclass changes behavior in a way that surprises the rest of the program, it violates LSP. Understanding this helped me avoid “clever” inheritance tricks that only made my code harder to maintain.
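Here is a tiny sketch of my own showing a substitution that keeps the parent’s promises, so any code written against the parent keeps working with the child:

```python
class Notifier:
    def send(self, message: str) -> bool:
        """Returns True when the message was handed off successfully."""
        print(f"sending: {message}")
        return True

class EmailNotifier(Notifier):
    def send(self, message: str) -> bool:
        # Same inputs, same kind of result: no surprises for callers of Notifier.
        print(f"emailing: {message}")
        return True

def alert(notifier: Notifier) -> None:
    assert notifier.send("disk almost full")  # works with parent or child
```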

4. Interface Segregation Principle (ISP)

Don’t force classes to depend on methods they don’t use.
Large interfaces lead to confusing, overloaded classes. Smaller, more focused interfaces keep your code clean and prevent unnecessary dependencies. ISP taught me that more interfaces, not fewer, can actually make a system easier to manage.
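A small sketch of my own of what splitting one fat interface into focused ones can look like in Python, using abstract base classes as the interfaces:

```python
from abc import ABC, abstractmethod

class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc: str) -> None: ...

class Scanner(ABC):
    @abstractmethod
    def scan(self) -> str: ...

class OfficeMachine(Printer, Scanner):   # needs both capabilities, implements both
    def print_doc(self, doc: str) -> None:
        print(f"printing {doc}")
    def scan(self) -> str:
        return "scanned text"

class BasicPrinter(Printer):             # only depends on the methods it actually uses
    def print_doc(self, doc: str) -> None:
        print(f"printing {doc}")
```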

5. Dependency Inversion Principle (DIP)

Depend on abstractions, not concrete classes.
This principle makes your code flexible and testable. By depending on interfaces instead of specific implementations, you can swap parts of your system without rewriting everything. DIP made testing and updating my code so much easier.
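Here is a brief sketch of my own of what depending on an abstraction looks like in Python; the service takes any MessageSender, so we can hand it a real sender in production and a fake one in tests:

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):                 # the abstraction
    @abstractmethod
    def send(self, text: str) -> None: ...

class SmtpSender(MessageSender):          # one concrete implementation
    def send(self, text: str) -> None:
        print(f"SMTP: {text}")

class FakeSender(MessageSender):          # a test double we can swap in
    def __init__(self):
        self.sent = []
    def send(self, text: str) -> None:
        self.sent.append(text)

class WelcomeService:
    def __init__(self, sender: MessageSender):  # depends on the abstraction only
        self.sender = sender
    def greet(self, name: str) -> None:
        self.sender.send(f"Welcome, {name}!")
```

In a test we pass FakeSender and inspect sender.sent; in production we pass SmtpSender, and WelcomeService never changes.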

In conclusion, the SOLID principles aren’t just theory; they truly make your projects cleaner and more maintainable. You don’t need to master them overnight. Start applying one principle at a time, and soon your code will naturally become more structured, scalable, and beginner-friendly. If I could learn it, you absolutely can too.

References:

https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Improving your API documentation using Swagger and OpenAPI

OpenAPI and Swagger are huge tools that software developers use every day. They are vital for building clear, maintainable, and interactive API documentation. The article I chose is named “How to improve API documentation with Swagger and OpenAPI”. The article explains that APIs are central to modern software design, and their documentation plays a critical role in ensuring that developers can consume and maintain them correctly. The article argues that using the OpenAPI Specification combined with the Swagger ecosystem brings much-needed standardization to REST API documentation. It also explains that the OpenAPI Specification is readable by people and machines and explicitly defines an API’s structure along with its endpoints, parameters, responses, and data models. This standardization helps teams avoid the ambiguity that often comes from loosely documented APIs.
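To give a feel for what such a definition contains, here is a minimal, hypothetical OpenAPI 3.0 document for a made-up books endpoint, written as a Python dict for illustration (in practice it would live in a YAML or JSON file):

```python
# A minimal, hypothetical OpenAPI 3.0 definition expressed as a Python dict.
openapi_definition = {
    "openapi": "3.0.3",
    "info": {"title": "Hypothetical Books API", "version": "1.0.0"},
    "paths": {
        "/books/{bookId}": {
            "get": {
                "summary": "Fetch one book",
                "parameters": [{
                    "name": "bookId",        # path parameter
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "The requested book"},
                    "404": {"description": "Book not found"},
                },
            }
        }
    },
}
```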

There are also many tools that come with Swagger, such as the Editor, UI, Codegen, and Inspector. The Editor lets developers create and edit OpenAPI definitions in JSON or YAML, with built-in validation so syntax errors can be caught immediately. The UI turns OpenAPI definitions into interactive documentation that lets users try out API endpoints from their web browsers. Codegen generates client libraries, server stubs, and SDKs that help speed up the development process on different platforms. Finally, Inspector is a tool for testing APIs directly and generating OpenAPI definitions based on existing APIs.

There is also a recently updated version, with the official release of OpenAPI 3.0 allowing more modularity in how the surface area of an API is defined. This approach provides more versatility when describing API request and response models. The latest version also reinforces the importance of good schema and component reuse, as well as multipart document handling.

The reason I chose this topic is that we have been doing a lot of work with Swagger and APIs, and I wanted to look closely into how vital they are for a software developer in the real world. I also wanted to look closer into how Swagger can improve my design skills. After reading this article I started to see why proper documentation isn’t just something that is nice and handy, but a necessity for being a skilled developer. From now on I plan to strengthen my understanding of Swagger and APIs, as I believe it will also help me land a job in the future.

https://www.techtarget.com/searchapparchitecture/tip/How-to-improve-API-documentation-with-Swagger-and-OpenAPI?utm_source=chatgpt.com

From the blog Thanas CS343 Blog by tlara1f9a6bfb54 and used with permission of the author. All other rights reserved by the author.

CS343: The Lost Art of Refactoring

Papa Bear, is it true the humans put their noodles in one bowl, their vegetables in another, and the broth in a third, unrelated one?

This week I read a blog article from Martin Fowler’s blog titled “Refactoring: This Class is Too Large” by Clare Sudbury. Find that post here.

What was the blog about?

Sudbury walks us through how she would refactor a poorly-written (real-life example) codebase and the common mistakes people make when developing code that ultimately precipitate rounds of refactoring.

Why do we refactor?

Our code evolves. At first, as Sudbury puts it, what most people have are big “entrance halls” to their problems; big main methods and such where a jumble of not-necessarily related but nevertheless heavily coupled code sits together for the sake of convenience and because we have, at that point, only a shaky or infirm grasp of what architecture our code will have in the future. That’s fine–for a start, and for a time.

Problems arise when we keep trying to entertain in the “entrance hall”. We need to refactor in order for our structural conventions to even continue to make sense.

Why did I choose this article?

I need to be better and more strategic about refactoring, and having a ton of visual references (the examples) paired with the reinforcement of best practices helps tremendously.

There are also other considerations we haven’t talked about in-class that Sudbury talks about in more detail, such as how to structure our refactors within a series of commits that make logical sense for anyone looking from the outside in, so her blog is doubly worth reading for those extra touches alone.

What are the steps to refactoring?

For the most part, these steps are well-founded in the context of our course and should be pretty easily understood in that sense. In short, though, we can think of refactoring out a method from a parent class as having six distinct steps, with a small sketch of the final extraction after the list.

  1. Organize methods and related code into distinct regions–more of a logistical than an architectural point, but keeps with what we’ve learned in this course. Code that changes together stays (and collapses) together.
  2. Verify dependencies and class relationships of the method to be refactored using diagrams or similar tools–again, tracks with what we’ve learned. This is exactly the use case of UML class and sequence diagrams.
  3. Clean-up methods that stay in the “entrance hall”–we’ll keep some parts of our method in our main class, but with appropriate changes, since methods that they might have invoked may now be sitting elsewhere.
  4. Create a new class to contain the refactored method and test it in the simplest terms possible (tiny steps principle).
  5. Build more test coverage for the new class (TDD).
  6. Move method(s) to the new class.

We repeat for as many methods with unique behaviors as there are in the code.
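Here is a tiny sketch of what that final extraction (step 6) can look like, using a made-up example of my own rather than Sudbury’s code: behavior that used to sit inside the oversized class moves into its own class, and the original class simply delegates to it.

```python
# Before, ReportGenerator both formatted reports *and* emailed them,
# a hint that it was too large. After the extraction, emailing lives
# in its own class and the "entrance hall" class delegates to it.

class EmailSender:
    """Extracted class: owns the emailing behavior."""
    def send(self, address: str, body: str) -> None:
        print(f"to {address}: {body}")

class ReportGenerator:
    def __init__(self, sender: EmailSender):
        self.sender = sender              # the original class now delegates

    def build(self, data: list[int]) -> str:
        return f"total = {sum(data)}"

    def build_and_email(self, data: list[int], address: str) -> None:
        self.sender.send(address, self.build(data))
```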

How does this relate to the course material?

We learned that, in the course of working on a problem, we might have code structures that become obsolete or which are consolidated into others, i.e. composition. And we’ve done refactoring a little, as with the DuckSimulator code from one of our homeworks. But what we haven’t looked at is how to actually systemize this process of refactoring so that it becomes second nature to us and the steps taken become intuitive rather than a feat of mental gymnastics. If we can’t conceptualize the process of refactoring as an organic evolution of our codebase, we are doomed to stay in cycles of bad code and refactoring, bad code and refactoring, etc. For my own sake and for that of my professional career, I’d better learn to refactor more.

It’s not just about making unit tests.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Week 3-CS 343 blog: REST API DEVELOPMENTS AND BEST PRACTICES

REST APIs are really interesting and fun to work with. They enable communication between different software systems over the internet, typically using the HTTP protocol. However, REST APIs can be difficult sometimes due to complex queries and filtering, batch operations, side effects, etc. The good thing is I went through this blog post hosted on Medium, which explains some good tips for working with REST APIs. I will walk through its ideas and plans to help us get better at REST API design.

Here is their blog https://medium.com/epignosis-engineering/rest-api-development-tips-and-best-practices-part-1-9cbd4b924285

  1. Planning
    • Do research first: Study existing REST API designs, standards, and popular APIs. Consider whether REST is the right paradigm, but also explore alternatives like GraphQL.
    • Look at other APIs: Try working with well-known APIs (GitHub, Stripe, Twitter, PayPal) to understand what works and what doesn’t

2. Foundations Matter

  • A solid early foundation avoids costly refactors later.
  • Assume the API will grow: design for scale, future endpoints, versioning, pagination, analytics, etc.

3. Specification

  • Write an API spec before coding
  • Use tools like OpenAPI/Swagger for designing your API contract
  • Specification pays off – especially for APIs that are not just internal

4. Testing

  • Critical for APIs: because they connect server data with clients, they need to be very reliable
  • Don’t rely solely on manual testing – build an automated test suite (a small sketch follows this list)
  • Focus on functional (black-box) tests, not just unit tests
  • Use a test database that can be reset; include regression tests for past bugs
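As a sketch of what an automated functional (black-box) test might look like, here is a small pytest example of my own, assuming a hypothetical books API running locally at http://localhost:5000:

```python
import requests  # HTTP client; pytest discovers functions named test_*

BASE = "http://localhost:5000"  # hypothetical test deployment of the API

def test_create_then_fetch_book():
    # Black-box: we only talk to the API over HTTP, exactly like a real client.
    created = requests.post(f"{BASE}/books", json={"title": "Dune"}, timeout=5)
    assert created.status_code == 201
    book_id = created.json()["id"]

    fetched = requests.get(f"{BASE}/books/{book_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["title"] == "Dune"
```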

5. Deployment

  • Decouple your API from other server apps: keep the API as a separate module.
  • Why? So updating or deploying one part doesn’t risk breaking everything else.
  • Independent deployments make development and operation safer and simpler.

6. Other Good Practices

  • Be consistent in resource naming: choose either singular or plural for your endpoints (/car vs /cars), but don’t mix.
  • For PUT or PATCH requests, return the updated resource in the response so clients know its new state (see the sketch after this list).
  • Avoid using multiple forms of authentication or session mechanisms: for example, don’t mix custom tokens with default PHP session cookies (PHPSESSID) — it leads to confusion.
  • Don’t leak internal errors (e.g., SQL errors) to API consumers. Log the details internally, but return a generic 500 error externally for security reasons.

Why This Matters

  • The article is very practical: instead of rehashing REST theory, it focuses on avoiding pitfalls the author has personally encountered.
  • By planning, specifying, versioning properly, and testing early, you build a more stable and maintainable API.
  • A thoughtful deprecation strategy and good error-handling also improve reliability and developer experience for your API clients.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.

Backend Development for a future Frontend Developer

After spending the past few weeks learning about REST API calls, microservices, and backend management, there were a lot of aspects I had trouble understanding and even more topics that I wanted to dive deeper into. However, I had trouble nailing down a specific aspect of the entire backend web ecosystem and its inner workings. I realized I was struggling to understand the overall architecture of a website and how each piece connected to the others. Enter: “Backend web development – a complete overview”, a brief, high-level, but informative and easy-to-understand snapshot of how a website processes information, the software it requires, and how it all connects.

In the video, YouTuber SuperSimpleDev provides a comprehensive introduction to the concepts behind the server side of web applications. I was able to understand that a website’s backend is described as a request-response cycle, where a user sends a message or “request” to a server and receives a response back. This communication is managed by an API, which helps to outline the types of requests a user is allowed to make. Much like in class, the video touched upon the use of REST APIs as a common naming convention and standard, where specific request types match actions like POST for creating or GET for retrieving data.

When it comes to the actual infrastructure, the video notes that modern backend development relies heavily on cloud computing, where companies rent virtual machines from providers like AWS or Google Cloud instead of owning physical servers. This can be managed manually as Infrastructure as a Service (IaaS) or through Platform as a Service (PaaS) tools that manage infrastructure automatically. As apps get bigger, developers may then design a network of microservices to split a large backend into smaller, specialized services or utilize Software as a Service (SaaS) for specific functions. 

This video was very helpful when it came to expanding on many different topics within backend web services. For technologies like cloud computing, it helped me understand the usefulness of these services and how they benefit web developers when it comes to managing many databases and servers. AWS specifically has been something that I have been considering looking into on my own self learning journey as I aim to become a web developer. While I would like my focus to be more front end development centered, I would like to be well versed in the technologies and design of all aspects of web development. This would help make me more well rounded and create better sites that appeal to both employers and potential clients. 

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

FOSS -> HFOSS

https://timreview.ca/article/399

Free and open source software (FOSS) has been a focus of our curriculum this semester. Our introduction to working with and managing the Thea’s Food Pantry software has provided us with experience contributing to FOSS as well as understanding the benefits of these types of programs. The article Humanitarian Free and Open Source Software by Chamindra de Silva dives into an important subsection of FOSS, one that focuses on humanitarian efforts and how it can benefit those in need or those suffering from a disaster event, otherwise described as “software engineers without borders”. A key aspect of this type of software – and the reason I chose to highlight this article – is that it inspires collaboration from others in the community, with the goal of making the program better, evolving it, and offering it as a tool for others to use and benefit from, therefore upholding the key principles of FOSS.

The article explores technology’s role in humanitarian efforts: creating software to help address rapid-onset natural disasters, slow-onset natural disasters, and human-instigated disasters. Many of these models support efforts to manage large-scale data problems that would otherwise overwhelm traditional information-gathering methods. Humanitarian goals and values align with much of FOSS’s mission: to be available to the public to serve a common good, to rely on the support of those in a community to help others in that community, and to lower costs in order to leave more funds for aid.

However, beyond their similarities, HFOSS requires more attention to certain “best practices” because of how critical it is to the communities it serves. It demands a product be high quality and relatively stable, because system failures can result in information loss that affects life-saving efforts. The software is required to be highly intuitive with an easy learning curve, while also being resilient to many types of political, economic, and ecological environments.

Humanitarian free and open source software has been of interest since our initial introduction to it in class. Having been aware of and utilized FOSS implementations, I was unaware of this specific movement. The idea that I can unite people in our world by helping them during times of crisis through software development and my career in tech appeals to me greatly. Knowing that I could potentially help create software that connects people with lost loved ones (Sahana), or host maps of previously unmapped areas to help first responders (OpenStreetMaps), helps me open my mind to the ever-growing opportunities afforded to me in my new career. It also gets me thinking about ways to create and tie in an HFOSS project of my own that relates to my current career in Veterinary Medicine.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

CS343: The Dark Days of Design (and DDD)

This week I read a blog post from Robert Laszczak of Three Dot Labs titled “Software Dark Ages”. You can find that post here.

If the Dark Ages happened a long time ago, why are people still using Eclipse?

What was the article about?

Laszczak recounts his experience working for a company heavily laden with pretty much every single kind of organizational, institutional, architectural, and technical flaw or burden out there. In his retelling, even something as simple as changing the functionality of a button would take months because of extremely high code coupling–among myriad other problems.

The way he solved this problem was to use DDD, or domain-driven design. DDD, like the similarly named test-driven development (TDD), places its focus on a particular problem domain and builds up around it. In TDD, that focus would have been our tests and test coverage, which we then build our code against. In DDD, we start with the problem and come up with a solution pattern that perfectly encapsulates the problem and its who/what/why/whatever.
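To make “start with the problem” slightly more concrete, here is a tiny, hypothetical Python sketch of my own (not from Laszczak’s post): the business rule lives inside the domain object itself rather than being scattered across controllers, queues, and database code.

```python
class Order:
    """Domain entity: it owns the business rule, not the UI or the database layer."""
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.items: list[str] = []
        self.submitted = False

    def add_item(self, item: str) -> None:
        if self.submitted:
            # The rule is enforced exactly where the domain expert stated it.
            raise ValueError("cannot add items to a submitted order")
        self.items.append(item)

    def submit(self) -> None:
        if not self.items:
            raise ValueError("cannot submit an empty order")
        self.submitted = True
```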

If this sounds deceptively intuitive, that’s because it probably is. As Laszczak points out, DDD and problem-first approaches like it have existed for roughly two decades at this point (DDD’s whitepaper dates to 2003). They’ve fallen into obscurity because people have forgotten the enlightened ideas of antiquity, like they’re living in the Dark Ages.

In [the] Software Dark Ages, infrastructure is putting down important software design techniques.

The problem is, Laszczak bemoans, that the modern software DevOps team is more concerned with achieving technical marvels (e.g. getting that Atlassian Certified t-shirt) than spending their brainpower on something as simple as considering the wheel. Technology is great, except why are we using it? What purpose does it have, not only for us, but for our customers?

The reason we engage with structural criticism is to gain a better understanding of the system at play. Look at 432 Park Ave in NYC and you might see what a poor grasp of the domain does to a project, and what happens when engineers are aware of the customer and their problems but choose to ignore both because… it was easier.

432 Park Ave: A beautiful marble-like white concrete exterior dominates the New York City skyline…
…except for the bughole voids and the hilariously slapdash Blu-Tak patch-and-fill job. A beautiful mess.

Why does DDD matter & why did I choose this post?

We have many frameworks for tackling software problems and organizing work, but very infrequently do we stop to consider the intricacies of the problem and how we can shape our project to meet the problem’s needs (inputs) rather than our organization’s abilities (outputs). Having high productivity and a strong team culture is important, but what does that mean if our codebase is in a terrible shape and our customer’s final product isn’t right-sized to their needs?

In our class, we’ve begun to work with Thea’s Pantry, and one of the things we’ve touched upon time-after-time is that the state of the system is specifically tailored to the needs of the pantry. Not all pantries–not a lot of pantries–the code we are looking at is the way it is because someone or something at Thea’s Pantry needed it to be that way.

That’s what thinking in terms of DDD yields us.

What not thinking about DDD yields:

  • 432 Park Ave
  • The Torment Nexus (NB DevOps: not an Atlassian product, don’t try it!!!)

What I think: DDD is a mouthful.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.