Software Frameworks

One day we played with “Hello world”, a couple of arrays, and some text input from the user, and the next, we messed with a small terminal-based game and ran an algorithm on some data. Up to this point, much, if not all, of the software most students have produced has been relatively small and local: programs that communicate with no more than a few other files within the same application, or perhaps with a tool like Docker.

When it comes to building applications with the feature sets a business needs to operate smoothly, and ultimately make money, the scale and complexity of the applications being developed grow rapidly. Developing software that fully satisfies business requirements can be a very complicated and tedious process, and that is just the reality of engineering software.

In light of the challenges of efficiently and reliably producing software that works for a business’ use-case, developers have come up with many ways to streamline the process. One of the most popular ways to do this is to utilize a “framework”. To gain a better understanding of what a framework is in software engineering, I read through a blog post on Contentful, linked here:

https://www.contentful.com/blog/what-is-a-framework/

A programming framework is essentially a pre-built skeleton for an application. There are many different types of framework (frontend, backend, server, mobile), but whichever it is, the goal is always the same: almost every application ever made needs the same core functionality, so that functionality is bundled together for us to extend with our specific business logic.
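
To make that concrete, here is a minimal sketch of what “extending a framework with business logic” can look like in Django, the framework named later in this post. The view name, URL, and data are my own invented example, not something from the Contentful article:

```python
# views.py -- our business logic; routing, request parsing, and response
# handling all come from the framework.
from django.http import JsonResponse

def order_summary(request):
    # Hypothetical business logic for illustration only.
    data = {"orders": 3, "total": 42.50}
    return JsonResponse(data)

# urls.py -- we only declare the route; Django's machinery does the rest.
from django.urls import path
from . import views

urlpatterns = [
    path("orders/summary/", views.order_summary),
]
```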

Naturally, having a ton of parts of your application already put together and ready to be extended comes with major advantages, and, unsurprisingly, a couple of potential disadvantages.

The most significant upsides of frameworks come in the form of faster development, fewer accidental set-up errors, built-in security, and a common baseline for applications, so that every developer on a project knows, for example, that they are working with Django. All of these benefits are massive for both developers and businesses.

The downside of a framework is that the skeleton is already assembled a particular way, and you probably shouldn’t change it, leaving you with less freedom to develop the application exactly as you might like. It also requires onboarding developers into that framework: even if someone has worked with Python before, using Django means learning Django-specific implementation patterns and best practices, not just Python.

I’ve enjoyed reading about software frameworks, as even though I’d heard of Angular, or Ruby on Rails, I didn’t fully understand what exactly was so important about these frameworks, and why so many of them were present on job postings everywhere. After reading about them, the answer couldn’t be more obvious. Professional developers will utilize these tools to improve and streamline their workflows, and that’s that!

From the blog CS@Worcester – KeepOnComputing by CoffeeLegend and used with permission of the author. All other rights reserved by the author.

Understanding REST API

Hello everyone

This week’s blog will focus on REST APIs, a topic we’ve been working with heavily in class and one that, admittedly, has left me in awe because I have realized how useful it is. Writing these blogs has actually been helping me a lot in understanding the topics we go over in class at a deeper level. Some new topics can be overwhelming at first, but writing the blogs allows me to break down the complex concepts and understand them more easily. While looking for different blogs that went over this topic, I found one that really stood out and grabbed my attention.

The author begins the blog with a clear definition of what a REST API is, which helps the reader ease into the post. I like how he states the definition of REST API and then later on rewords that definition in simpler terms. This allows readers who are new to REST APIs not to get overwhelmed or scared off by technical words, something I personally value a lot and appreciate when authors do for us readers. He then continues by mentioning the six principles of REST and follows up by explaining each one. For each principle he does an amazing job of explaining the core idea behind it without overwhelming the reader. He keeps it simple, not too long, and follows it up with great examples, making each concept simpler to understand. I can’t explain every principle here, but I will write a single sentence for each of them based on what helped me the most in understanding them.

Let’s first list the six principles of REST: Uniform Interface, Client-Server, Stateless, Cacheable, Layered System, and Code on Demand. The uniform interface constraint represents the idea that all components in a system should follow a general, consistent method of communication. The client-server principle emphasizes separating interface concerns from data storage so apps can be more portable and servers simpler. Statelessness means each request from a client must contain all the information needed to be understood; nothing is stored between requests. Cacheability allows clients to reuse certain response data to improve performance. A layered system restricts each component to interacting only with the layer directly beneath it, helping maintain structure and scalability. Finally, code on demand allows servers to extend a client’s functionality by sending executable code; this one is optional, as the author mentions, but still a nice touch that he included it.
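
To make the statelessness constraint concrete, here is a tiny sketch of my own (not from the blog) showing a client request that carries everything the server needs on its own; the URL and token are made up:

```python
import requests

# Every request is self-contained: the auth token and query parameters
# travel with it, so the server keeps no session state between calls.
response = requests.get(
    "https://api.example.com/orders",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
    params={"page": 2, "per_page": 50},
)
print(response.status_code)
print(response.json())
```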

In conclusion, this blog was very informative and did an amazing job of teaching me more about REST APIs and helped me see their purpose, which is to make applications simpler, more scalable, and easier to maintain for both clients and developers. The more you get into software development, the more you appreciate things like this, and now I can actually understand them, which is the best part of learning!

Source: https://restfulapi.net/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

A SIMPLE UNDERSTANDING OF SOLID PRINCIPLES.

When I first started writing object-oriented code, I struggled with messy classes, confusing logic, and unexpected bugs from the smallest changes. It felt like no matter how hard I tried to stay organized, something always broke. Learning the SOLID principles completely transformed the way I write code. These five guidelines helped me simplify my projects, make them easier to extend, and create code that finally made sense. If you’re just starting out, I hope this breakdown helps you the way it helped me.

1. Single Responsibility Principle (SRP)

A class should only have one job. That’s it.
When one class tries to do everything (handle data, print reports, manage files, and validate input), it becomes fragile. Changing one responsibility risks breaking another. I used to write huge classes thinking it would “keep things together,” but it only created chaos. Once I started separating responsibilities into smaller classes, everything became easier to understand and debug.
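
Here is a rough sketch of what that separation can look like, with each class owning exactly one job (the class names are invented for illustration, not taken from the referenced article):

```python
# Instead of one giant class that loads, validates, and prints reports,
# each responsibility lives in its own small class.

class ReportLoader:
    def load(self, path: str) -> list[str]:
        with open(path) as f:
            return f.readlines()

class ReportValidator:
    def is_valid(self, lines: list[str]) -> bool:
        # valid only if no line is blank
        return all(line.strip() for line in lines)

class ReportPrinter:
    def print_report(self, lines: list[str]) -> None:
        for line in lines:
            print(line.rstrip())
```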

2. Open–Closed Principle (OCP)

Your code should be open for extension but closed for modification.
This principle protects working code from unnecessary edits. Instead of constantly changing old methods, you extend behavior through new classes or strategies. It’s like adding a new room to a house without tearing down the entire structure. OCP helped me stop rewriting code that already worked and start building on top of it safely.
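A small sketch of that “add a new room without tearing down the house” idea, using invented discount classes (my example, not the article’s):

```python
from abc import ABC, abstractmethod

class DiscountRule(ABC):
    @abstractmethod
    def apply(self, total: float) -> float: ...

class NoDiscount(DiscountRule):
    def apply(self, total: float) -> float:
        return total

class HolidayDiscount(DiscountRule):
    # Added later as a new class; checkout() below never changes.
    def apply(self, total: float) -> float:
        return total * 0.9

def checkout(total: float, rule: DiscountRule) -> float:
    # Closed for modification, open for extension via new DiscountRule classes.
    return rule.apply(total)

print(checkout(100.0, HolidayDiscount()))  # 90.0
```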

3. Liskov Substitution Principle (LSP)

Child classes should be usable anywhere the parent class is expected without breaking the program.
This matters when using inheritance. If a subclass changes behavior in a way that surprises the rest of the program, it violates LSP. Understanding this helped me avoid “clever” inheritance tricks that only made my code harder to maintain.
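The classic square/rectangle example (my illustration, not from the article) shows the kind of “surprise” that breaks LSP:

```python
class Rectangle:
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height

    def resize(self, width: int, height: int) -> None:
        self.width, self.height = width, height

    def area(self) -> int:
        return self.width * self.height

class Square(Rectangle):
    def resize(self, width: int, height: int) -> None:
        # Forces both sides equal, surprising callers written against Rectangle.
        self.width = self.height = width

def stretch(shape: Rectangle) -> int:
    shape.resize(3, 4)
    return shape.area()

print(stretch(Rectangle(1, 1)))  # 12, as the caller expects
print(stretch(Square(1, 1)))     # 9, the substitution broke an expectation
```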

4. Interface Segregation Principle (ISP)

Don’t force classes to depend on methods they don’t use.
Large interfaces lead to confusing, overloaded classes. Smaller, more focused interfaces keep your code clean and prevent unnecessary dependencies. ISP taught me that more interfaces, not fewer, can actually make a system easier to manage.
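
A quick sketch of splitting one “fat” interface into focused ones (invented names, not from the article):

```python
from abc import ABC, abstractmethod

class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc: str) -> None: ...

class Scanner(ABC):
    @abstractmethod
    def scan(self) -> str: ...

class SimplePrinter(Printer):
    # Depends only on the one method it actually uses.
    def print_doc(self, doc: str) -> None:
        print(doc)

class MultiFunctionDevice(Printer, Scanner):
    # Opts into both small interfaces instead of one bloated one.
    def print_doc(self, doc: str) -> None:
        print(doc)

    def scan(self) -> str:
        return "scanned page"
```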

5. Dependency Inversion Principle (DIP)

Depend on abstractions, not concrete classes.
This principle makes your code flexible and testable. By depending on interfaces instead of specific implementations, you can swap parts of your system without rewriting everything. DIP made testing and updating my code so much easier.
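A minimal sketch of depending on an abstraction so implementations can be swapped (again, invented names for illustration):

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

class FakeGateway(PaymentGateway):
    # A stand-in implementation, handy for unit tests.
    def charge(self, amount: float) -> bool:
        return True

class OrderService:
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway  # depends on the abstraction, not a concrete class

    def place_order(self, amount: float) -> bool:
        return self.gateway.charge(amount)

print(OrderService(FakeGateway()).place_order(19.99))  # True
```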

In conclusion, the SOLID principles aren’t just theory; they truly make your projects cleaner and more maintainable. You don’t need to master them overnight. Start applying one principle at a time, and soon your code will naturally become more structured, scalable, and beginner-friendly. If I could learn it, you absolutely can too.

References:

https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Improving your API documentation using Swagger and OpenAPI

OpenAPI and Swagger are huge tools that software developers use every day, and they are vital for building clear, maintainable, and interactive API documentation. The article I chose is titled “How to improve API documentation with Swagger and OpenAPI”. The article explains that APIs are central to modern software design, and their documentation plays a critical role in ensuring that developers can consume and maintain them correctly. It argues that using the OpenAPI Specification combined with the Swagger ecosystem brings much-needed standardization to REST API documentation. It also explains that the OpenAPI Specification is readable by both people and machines and explicitly defines an API’s structure, including its endpoints, parameters, responses, and data models. This standardization helps teams avoid the ambiguity that often comes from loosely documented APIs.

There are also many tools that come with Swagger, such as the Editor, UI, Codegen, and Inspector. The Editor lets developers create and edit OpenAPI definitions in JSON or YAML, with built-in validation so syntax errors can be caught immediately. The UI turns OpenAPI definitions into interactive documentation that lets users try out API endpoints from their web browsers. Codegen generates client libraries, server stubs, and SDKs that help speed up the development process on different platforms. Finally, Inspector is a tool for testing APIs directly and generating OpenAPI definitions from existing APIs.
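
For context, here is roughly what a small OpenAPI 3.0 definition looks like in YAML, the kind of file the Editor validates and the UI renders; the /users path and User schema are just an invented example, not something from the article:

```yaml
openapi: 3.0.3
info:
  title: Example API        # made-up API for illustration
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Get a user by id
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/User"
        "404":
          description: User not found
components:
  schemas:
    User:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
```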

The article also covers the official release of OpenAPI 3.0, which allows more modularity and a more flexible approach to defining the surface area of an API. This approach provides more versatility when describing API request and response models. The latest version also reinforces the importance of good schema and component reuse, as well as multipart document handling.

The reason I chose this topic is that we have been doing a lot of work with Swagger and APIs, and I wanted to look closely at how vital it is to a software developer in the real world. I also wanted to look closer at how Swagger can improve my design skills. After reading this article I started to see why proper documentation isn’t just something that is nice and handy, but a necessity for being a skilled developer. From now on I plan to strengthen my understanding of Swagger and APIs, as I believe it will also help me in landing a job in the future.

https://www.techtarget.com/searchapparchitecture/tip/How-to-improve-API-documentation-with-Swagger-and-OpenAPI?utm_source=chatgpt.com

From the blog Thanas CS343 Blog by tlara1f9a6bfb54 and used with permission of the author. All other rights reserved by the author.

Blog 3 – Understand clean code

Coding is just like writing an essay: it requires a logical structure, a clear message, and readability so others can understand it. That’s why we need “Clean Code” in every programming project. Clean code refers to code that’s easy to read, understand, and maintain. The ultimate goal is not just working software, but software that remains clean and maintainable throughout its lifecycle. So, how do we write clean code?

The Codacy article “What Is Clean Code? A Guide to Principles and Best Practices” (https://blog.codacy.com/what-is-clean-code) provides a good explanation of clean code and how to make code more understandable for others to read, which also helps us improve our coding skills.

Why Clean Code Matters

  • Readability & Maintenance: Clear code helps developers (including new ones) understand and navigate the codebase faster. blog.codacy.com
  • Team Collaboration: When code follows shared, clean practices, it’s easier for team members to read each other’s work and contribute. blog.codacy.com
  • Debugging: Clean structure (good names, simple functions) makes it easier to isolate and fix bugs. blog.codacy.com
  • Reliability: By adhering to best practices, you reduce the chances of introducing bugs and make the code more stable and reliable. blog.codacy.com

Key Principles & Best Practices

The article outlines several principles that help make code clean (a short code sketch pulling a few of them together follows the list):

  1. Avoid Hard-Coded Numbers
    • Use named constants instead of “magic” numbers so their meaning is clear and changeable.
  2. Use Meaningful Names
    • Choose variable, function, and class names that reveal their intent and purpose.
    • If a name needs a comment to explain it, the name itself is probably too vague.
  3. Use Comments Wisely
    • Don’t comment the obvious. Instead, use comments to explain why something is done, not what.
  4. Write Short, Single-Purpose Functions
    • Functions should do one thing (following the Single Responsibility Principle).
    • When functions become long or handle multiple tasks, break them into smaller ones.
  5. Apply the DRY Principle (“Don’t Repeat Yourself”)
    • Avoid duplicating logic; reuse code via functions, modules, or abstractions.
  6. Follow Code-Writing Standards
    • Use consistent formatting, naming conventions, and style according to your language’s community or team guidelines.
    • Examples include PEP 8 for Python or common JavaScript/Java style guides.
  7. Encapsulate Nested Conditionals
    • Instead of deeply nested if/else logic, move conditional logic into well-named helper functions — improves readability and reusability.
  8. Refactor Continuously
    • Regularly revisit and clean up your code. Leave it in a better state than when you found it.
  9. Use Version Control
    • Track your changes using a version control system (like Git). It helps with collaboration, rolling back changes, and safer refactoring.
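
Here is the short sketch promised above, pulling together a few of these tips (named constants, meaningful names, short single-purpose functions); the login example itself is mine, not Codacy’s:

```python
MAX_LOGIN_ATTEMPTS = 3  # tip 1: a named constant instead of a "magic" 3

def is_locked_out(failed_attempts: int) -> bool:
    # tips 2 and 4: the name says what it does, and it does only that
    return failed_attempts >= MAX_LOGIN_ATTEMPTS

def remaining_attempts(failed_attempts: int) -> int:
    return max(0, MAX_LOGIN_ATTEMPTS - failed_attempts)
```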

Automate Clean Code Practices

  • Codacy recommends using its tools (static code analysis, duplication detection, code metrics) to automate enforcement of clean-code rules as you write.
  • This helps catch code-quality issues and security vulnerabilities early, keeping the codebase maintainable and high-quality.

Mindset Over Rules

  • Clean code is more than following a checklist — it’s a mindset and a commitment to quality.
  • The article argues for writing code not just to work, but to be read and maintained by humans.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.

CS-348 Quarter 3 Blog Post

For Quarter 3 I’ve chosen this article, written by Ting Yu for The Brink at Boston University.
https://www.bu.edu/articles/2022/how-copyrights-patents-trademarks-may-stifle-creativity-and-progress/

This article was written in August of 2022. It establishes the idea that the law has not been able to keep up with the development of the digital era. This idea, proposed by Jessica Silbey, an expert on constitutional and intellectual property law, argues that current law does little to advance the public’s creativity and ability to make society better for the collective; instead, the law as it stands today is lined up to empower individuals and corporations. In other words, Silbey explains that the idea of individually copyrighting and trademarking one’s own work is more an empowerment to exclude, making copyright and trademark law seem more offensive than defensive.

The reason I decided that this article fit the bill for the semester is of course its relevance to our topics surrounding copyright law and trademarks of our work as programmers and developers. But at the same time I chose it for its interesting take on the implications of copyright law and trademarks on the creativity of the public.

Pulling from the article’s framing of copyright and trademarks as an empowerment to exclude: at first it felt like a strange take, but the more I thought about it, the more it made sense to me. While at face value the point of copyright and trademarks is to protect the intellectual property of whoever created it, in the grand scheme of things, especially in a world where you can instantly contact someone on the other side of the planet, a trademark does feel more like a bouncer at the entrance of a club, letting in select people and excluding others. It is not the people themselves that are the problem but their intentions, since trademarks and copyright determine what one can do with an intellectual property. Something specific Silbey brings up that, to me, shows the severe issue with current law is her example of how, in days long past, it was usually the inventor of something who owned the patent to it, whereas in today’s world it is teams of people working toward a single goal, usually in competition with other companies, leaving copyright and trademarks in the hands of the company the team operates under, depending on contract stipulations. For example, the battle between Microsoft, Sony, and Nintendo to be the next big innovator in the gaming industry creates hostile, profit- and quota-driven work environments, with copyright and trademarks held tightly in an iron fist by these companies. So while I do need to give this idea more thought, I definitely think Silbey has a strong point.

From the blog CS@Worcester – Splaine CS Blog by Brady Splaine and used with permission of the author. All other rights reserved by the author.

CS343: The Lost Art of Refactoring

Papa Bear, is it true the humans put their noodles in one bowl, their vegetables in another, and the broth in a third, unrelated one?

This week I read a blog article from Martin Fowler’s blog titled “Refactoring: This Class is Too Large” by Clare Sudbury. Find that post here.

What was the blog about?

Sudbury walks us through how she would refactor a poorly-written (real-life example) codebase and the common mistakes people make when developing code that ultimately precipitate rounds of refactoring.

Why do we refactor?

Our code evolves. At first, as Sudbury puts it, what most people have are big “entrance halls” to their problems; big main methods and such where a jumble of not-necessarily related but nevertheless heavily coupled code sits together for the sake of convenience and because we have, at that point, only a shaky or infirm grasp of what architecture our code will have in the future. That’s fine–for a start, and for a time.

Problems arise when we keep trying to entertain in the “entrance hall”. We need to refactor in order for our structural conventions to even continue to make sense.

Why did I choose this article?

I need to be better and more strategic about refactoring, and having a ton of visual references (the examples) paired with reinforcement of best practices helps tremendously.

There are also other considerations we haven’t talked about in-class that Sudbury talks about in more detail, such as how to structure our refactors within a series of commits that make logical sense for anyone looking from the outside in, so her blog is doubly worth reading for those extra touches alone.

What are the steps to refactoring?

For the most part, these steps are well-founded in the context of our course and should be pretty easily understood in that sense. In short, though, we can think of refactoring out a method from a parent class as having six distinct steps.

  1. Organize methods and related code into distinct regions–more of a logistical than an architectural point, but keeps with what we’ve learned in this course. Code that changes together stays (and collapses) together.
  2. Verify dependencies and class relationships of the method to be refactored using diagrams or similar tools–again, tracks with what we’ve learned. This is exactly the use case of UML class and sequence diagrams.
  3. Clean-up methods that stay in the “entrance hall”–we’ll keep some parts of our method in our main class, but with appropriate changes, since methods that they might have invoked may now be sitting elsewhere.
  4. Create a new class to contain the refactored method and test it in the simplest terms possible (tiny steps principle).
  5. Build more test coverage for the new class (TDD).
  6. Move method(s) to the new class.

We repeat for as many methods with unique behaviors as there are in the code.
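
As a rough illustration of steps 4 through 6 (my own tiny example, not code from Sudbury’s post), a printing method that lived in a large class moves into its own class, and the original class delegates to it:

```python
class ReportPrinter:
    # Step 4: the new class holding the extracted behavior, testable on its own.
    def print_summary(self, title: str, lines: list[str]) -> None:
        print(title)
        for line in lines:
            print(f" - {line}")

class Report:
    # The old "entrance hall" keeps only a thin wrapper (step 6).
    def __init__(self, title: str, lines: list[str]):
        self.title = title
        self.lines = lines
        self._printer = ReportPrinter()

    def print_summary(self) -> None:
        self._printer.print_summary(self.title, self.lines)
```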

How does this relate to the course material?

We learned that, in the course of developing a program, we might have code structures that become obsolete or get consolidated into others, i.e., composition. And we’ve done a little refactoring, as with the DuckSimulator code from one of our homeworks. But what we haven’t looked at is how to actually systemize this process of refactoring so that it becomes second nature to us and the steps taken become intuitive rather than a feat of mental gymnastics. If we can’t conceptualize the process of refactoring as an organic evolution of our codebase, we are doomed to stay in cycles of bad code and refactoring, bad code and refactoring, etc. For my own sake and for that of my professional career, I’d better learn to refactor more.

It’s not just about making unit tests.

Kevin N.

From the blog CS@Worcester – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Open Source Licensing

Throughout my Computer Science degree, I have contributed to and collaborated on projects posted to GitHub or GitLab. I have also used, downloaded, and sometimes even shared school material that was Free and Open Source Software (FOSS). In the past, I never gave much thought to my rights to the material I accessed, how I was legally allowed to use it, or what practices needed to be in place to protect my own work posted on various version control hosting sites. In this class we are exploring licenses and copyrights for any project code that an individual produces, and the legalities behind the use, alteration, and distribution of those works.

The video Open Source Licence Types by creator Pro Tech Show dives into open source licenses specifically. This area of copyright law is important for me to understand because, as someone who will constantly use these sites to host my projects, and as someone who plans to contribute to or create HFOSS projects, I need a good grasp of how I can go about using others’ work and sharing my own code.

This video simplifies the more than 100 open source licenses by grouping them into five broad categories based on how they affect the user and the copyright owner: public domain, permissive licenses, weak copyleft, strong copyleft, and stronger copyleft. The most interesting part of the video was its explanation of the automatic “all rights reserved” copyright and how a public domain dedication waives all of those rights. I found this interesting because many people may be unaware of the automatic copyright placed on code they post to GitHub, even when they posted it intending for it to be shared and collaborated on. Sharing it that way, however, would require a public domain dedication, which acts as the absence of a license and may be more along the lines of what the author intended. It is not only important to know how licensing works as an author, but also as a user. Again, one may assume they are free to access, download, and modify code found on GitHub, but I should take more care to examine the specific license on each project. This would help me avoid legal issues down the road while also getting me more familiar with the different types of licenses and which types of projects require which license.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Writing Cleaner Code: Breaking Out of the Student Mindset

https://www.geeksforgeeks.org/blogs/tips-to-write-clean-and-better-code/

For most of our college careers, while learning how to create working and usable code, there was not really a strong emphasis on how to write “clean code”. Sure, best practices, industry/language standards, and formatting were explained; however, there is another aspect of code legibility and readability that is important for everyone writing code to understand. Our class this semester explores a more in-depth view of which industry standards should be followed and even helps us unlearn some habits (like over-commenting) that we were using in our code.

This article from GeeksforGeeks outlines seven key tips for writing clean, maintainable, and efficient code. It emphasizes that writing good software goes beyond just making it work; the code must be easy to read, understand, and change, as developers spend significantly more time reading code than writing it. The article indicates that adhering to certain practices leads to reliable, scalable software that is easier to debug and maintain, ultimately creating better collaboration among developers.

Some of these practices/principles are ones that we, as students, have already learned to adopt, such as using meaningful names for methods and variables, as well as learning how to organize our projects, specifically when it comes to object-oriented programming. A new takeaway that I will be more aware of is how descriptive names should be. I used to think that overly long variable names were “bad practice”; however, for the sake of readability and general understanding of the code, longer, more descriptive variable names may be warranted.

When it comes to practices that were a new concept for me, the use of comments in code was an important one to unlearn. The article notes that code should be self-explanatory through clear syntax and naming; comments should only be used when absolutely necessary, rather than stating the obvious. Another aspect of clean coding that changed the way I will continue to code is the idea that methods should serve a single purpose. This idea, otherwise called the Single Responsibility Principle, says that functions and classes should only do one thing and do it well. They should be small and focused, avoiding nested structures or too many arguments.
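
A tiny before-and-after of my own (not from the article) showing how a good name can replace a comment while keeping the function to a single purpose:

```python
# before: the name says nothing, so a comment has to explain it
def calc(d):
    # check if user is an adult
    return d >= 18

# after: the constant and the name make the comment unnecessary
ADULT_AGE = 18

def is_adult(age: int) -> bool:
    return age >= ADULT_AGE
```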

This article was important because it bridges the gap between “student code” (written for a class assignment) and “professional code” (written to be read and maintained by a team). Understanding how to write clean code will immensely help make all of your projects look more professional, as well as help you with technical interviews. Adopting these habits signals to employers that you aren’t just a coder who can put together a small school project, but a software engineer who builds sustainable, high-quality products.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Week 3 - CS 343 Blog: REST API DEVELOPMENT AND BEST PRACTICES

REST APIs are something really interesting and fun to work with. They enable communication between different software systems over the internet, typically using the HTTP protocol. However, REST APIs can be difficult sometimes due to complex queries and filtering, batch operations, side effects, and so on. The good thing is I went through this blog post on Medium, where they explain some good tips for working with REST APIs. I will walk through their ideas to help us get better at building REST APIs.

Here is their blog https://medium.com/epignosis-engineering/rest-api-development-tips-and-best-practices-part-1-9cbd4b924285

  1. Planning
    • Do research first: Study existing REST API designs, standards, and popular APIs. Consider whether REST is the right paradigm, but also explore alternatives like GraphQL.
    • Look at other APIs: Try working with well-known APIs (GitHub, Stripe, Twitter, PayPal) to understand what works and what doesn’t.

2. Foundations Matter

  • A solid early foundation avoids costly refactors later.
  • Assume the API will grow: design for scale, future endpoints, versioning, pagination, analytics, etc.

3. Specification

  • Write an API spec before coding
  • Use tools like OpenAPI/Swagger for designing your API contract
  • Specification pays off – especially for APIs that are not just internal

4. Testing

  • Critical for APIs: because they connect server data with clients, they need to be very reliable
  • Don’t rely solely on manual testing – build an automated test suite
  • Focus on functional (black-box) tests, not just unit tests
  • Use a test database that can be reset; include regression tests for past bugs

5. Deployment

  • Decouple your API from other server apps: keep the API as a separate module.
  • Why? So updating or deploying one part doesn’t risk breaking everything else.
  • Independent deployments make development and operation safer and simpler.

6. Other Good Practices

  • Be consistent in resource naming: choose either singular or plural for your endpoints (/car vs /cars), but don’t mix.
  • For PUT or PATCH requests, return the updated resource in the response so clients know its new state (a rough sketch follows this list).
  • Avoid using multiple forms of authentication or session mechanisms: for example, don’t mix custom tokens with default PHP session cookies (PHPSESSID) — it leads to confusion.
  • Don’t leak internal errors (e.g., SQL errors) to API consumers. Log the details internally, but return a generic 500 error externally for security reasons.
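
Here is the rough sketch referenced above, in Flask (my choice of framework, not the article’s), showing a PUT that returns the updated resource and hides internal errors behind a generic 500; the /cars resource and in-memory “database” are invented for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
cars = {1: {"id": 1, "color": "red"}}  # stand-in for a real database

@app.route("/cars/<int:car_id>", methods=["PUT"])
def update_car(car_id):
    try:
        if car_id not in cars:
            return jsonify({"error": "not found"}), 404
        cars[car_id].update(request.get_json())
        return jsonify(cars[car_id]), 200  # updated resource in the response body
    except Exception:
        # Log the real error internally; never expose SQL errors or stack traces.
        return jsonify({"error": "internal server error"}), 500
```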

Why This Matters

  • The article is very practical: instead of rehashing REST theory, it focuses on avoiding pitfalls the author has personally encountered.
  • By planning, specifying, versioning properly, and testing early, you build a more stable and maintainable API.
  • A thoughtful deprecation strategy and good error-handling also improve reliability and developer experience for your API clients.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.