Quarter-4 Blog Post

For my last quarterly blog post, I decided to explore the topic of documentation. Even though we didn’t discuss it directly in class, we were actually working with it during many of our activities: working with REST, learning how the frontend and backend communicate, and practicing the commands needed to run both sides of a project. Most of the time, the owners of a product aren’t technical, so engineers end up creating their own documentation so everyone can understand how things work. All of our in-class activities became much easier when we had clear, easy-to-follow documentation in front of us. The blog I chose for this post, written by Mark Dappollone, goes deeper into the topic by giving tips that help both the person building the product and anyone trying to understand how the software works.

Mark’s blog explains three main ideas about documentation that people should understand. The first is that the only way to create good documentation is to just get started and write something. He points out that when there is no documentation, people are forced to rely only on the code, and if the code is wrong or changes, no one will know because there’s nothing written down to compare it to. So even if your documentation isn’t perfect, it’s still better than nothing. Without it, clarity disappears and developers end up guessing what’s going on.

The second idea is the importance of using visual aids. One of my favorite examples in the post was when he included a screenshot and labeled each field with the API calls and the data fields they come from. This instantly shows how everything connects to backend data through REST, which is exactly what we practiced in class. With visuals like that, developers can read and understand the logic behind GET or POST requests without having to guess.
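
To make that concrete, here is a minimal sketch (with made-up endpoint and field names, not ones from Mark’s post) of the kind of field-by-field mapping an annotated screenshot documents:

```javascript
// Hypothetical mapping from a backend response to the UI fields it populates.
// Documenting this mapping, whether in a comment or an annotated screenshot,
// tells readers exactly where each piece of on-screen data comes from.

// A GET /api/users/:id endpoint is assumed to return { name, joinedAt }.
function mapUserToView(user) {
  return {
    headerName: user.name,                        // shown in the profile header
    memberSince: `Member since ${user.joinedAt}`, // shown in the sidebar
  };
}
```

With a mapping like this written down, a developer reading the frontend can trace every label back to the API call that produced it.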

The third and final idea is keeping documentation updated. In class, we noticed that even small changes to the code, like changing where data or input comes from, can make things confusing very quickly. Without good documentation, things get messy for anyone working on the product. Keeping documentation in sync with changes to both the backend and frontend helps avoid those misunderstandings.

In the future, I can definitely see myself working with API calls or making frontend fixes, and using documentation will help me understand the whole picture. It also ensures that anyone reading the documentation can see how everything ties together. I would especially like to use visual aids because I’m a visual learner, and I think many people are too. Using visuals can help clear up confusion and make the whole development process smoother.

Source: https://mdapp.medium.com/how-to-document-front-end-software-products-e1f5785279bb

From the blog Mike's Byte-sized by mclark141cbd9e67b5 and used with permission of the author. All other rights reserved by the author.

Software Licensing

For this blog entry, I chose to review the blog “Software Licensing – Terms and Licenses Explained” by CrystalLabs because software licensing connects to what we’ve been learning and working with in class. The blog explains what software licensing is, how it works, and why choosing the right license is essential for developers and organizations. It also covers the differences between proprietary software, open-source software, and other license types that decide whether people can modify or redistribute code. This ties directly into our course discussions on ownership, copyright law, and the responsibility developers have when sharing their work.

The blog begins by defining a software license as a legal agreement that outlines how software can be used, shared, or changed. Proprietary software licenses are restrictive, stopping users from seeing source code or redistributing the program. Most commercial software uses this model. The blog then explains that open-source software gives users more freedom, but the level of freedom depends on the specific license. Permissive licenses like MIT or BSD allow people to reuse the code in pretty much any way they want, including in commercial applications. Copyleft licenses like GPL require that any modified version remains open-source under the same license, protecting long-term openness. The blog also briefly touches on public-domain software, where the creator essentially gives up all rights.

I selected this resource because software licensing is extremely important but something I didn’t fully understand. Since we are studying software licensing in class, learning how and when I can use someone else’s code is crucial. It isn’t just about making software work; developers must also consider legal compliance and respect for the original author’s intentions.

What stood out to me was how much power a license has over the future of a project. One document can determine whether software becomes a community project or remains locked behind a licensing wall. Now I understand that licenses shape the project itself and are an essential part of starting any work. This made me more aware that for future software I work on, I will need to deliberately choose a license that matches my goals.

Overall, this resource helped me better understand software licensing as something tied to ethics, law, and user rights. I’m definitely more prepared to navigate licensing in my future work; with this knowledge in mind, the software I use and create will be properly licensed.

https://crystallabs.io/software-licensing

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

The Impact of Artificial General Intelligence on Frontend and Backend Work

I’ve been hearing about artificial general intelligence (AGI) a lot lately, so I investigated how it’s beginning to affect the day-to-day work of frontend and backend engineers. Since clean design, architecture, and concepts like SOLID and DRY are the major topics of our course, I was curious about how these fundamentals might evolve as AGI advances. What I found is that AGI does not diminish the significance of smart engineering – it enhances it.

With just a prompt, AGI tools are getting better at creating visual elements, layouts, and even complex relationships. For frontend developers, this means less time spent creating repeated markup and more time thinking about user experience, accessibility, and smooth interactions. Instead of hand-crafting every item, engineers will guide AGI-generated components and refine them, much like examining a merge request in GitLab. AGI may generate the first version, but the developer decides what is appropriate for production.

The impact is even greater for backend engineers. AGI is capable of writing controllers, creating REST API endpoints, building database structures, and even producing metadata. However, backend systems depend largely on architecture, security, error management, and scalability – areas where AGI still needs human guidance. A developer must still apply clean principles, prevent code smells, and create connected components. Similar to how pipelines, CI/CD, and merge approvals protect the main branch in a GitLab workflow, backend engineers must examine each AGI-generated change to ensure system stability.

One thing that sticks out to me is how AGI transforms the developer’s position from “code writer” to “system thinker.” Instead of entering every line manually, developers will focus on verifying logic, detecting edge cases, defining patterns, and structuring interactions. This is consistent with our understanding of UML diagrams and architectural styles: humans define the structure, while AGI can produce its parts. Using GitLab as an example makes this obvious. Even if AGI generates code on a feature branch, the developer still analyzes the merge request, reviews pipeline results, and ensures the update matches project requirements. AGI can aid, but it cannot replace human expertise in maintaining clean design, secure APIs, or reliable backend logic.

Overall, I concluded that frontend and backend duties are not vanishing – they are developing. While developers will focus more time on design, problem-solving, moral decision-making, and long-term maintainability, AGI will automate routine tasks. Understanding ideas like abstraction, encapsulation, and GRASP patterns will remain crucial since AGI operates best under strong human leadership.

References

  • OpenAI. AI and Software Development Workflows (2023).
  • GitLab Documentation. Merge Requests & CI/CD Pipelines.
  • Bostrom, N. Superintelligence: Paths, Dangers, Strategies (2014).

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

Licensing Your Projects

Sources: Codecademy, TLDR Legal, and Choose A License

Choosing a License

All projects should have a license, but choosing one for your project can seem daunting. By breaking the decision down into the simple goals you want to achieve, you can make the process much easier! 

First, choose how you want your users to interact with your project. Do you want to preserve your original code in future versions or do you want users to be free in how they modify it? There are two main kinds of licenses to choose from, permissive and copyleft. 

Permissive licenses do not restrict the user from modifying the code or using it how they like. Some popular examples are the MIT license and Apache license. Both of these licenses state the user can modify, distribute, and sublicense the code how they want, and do not require future versions to use the same license. 

Copyleft licenses preserve the original code and protect the creators and users. Some examples of copyleft licenses are the General Public License (GPL) and the Lesser General Public License (LGPL). Both of these licenses allow users to modify and distribute the code how they like, but require the same license to be used on all future versions and require user protections like install instructions and a change log. 

One license is not better than another; what matters is that your project has a license and that it’s the best fit for you. If you are still unsure which one to choose, try looking at the licenses of similar projects or of projects from people like you. 
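
Once you have chosen, a common convention is to declare the license in a machine-readable way with an SPDX identifier at the top of each source file. The snippet below is just an illustration; “MIT” is a placeholder for whichever license you actually picked.

```javascript
// SPDX-License-Identifier: MIT
// Copyright (c) <year> <your name>
//
// License-scanning tools read the SPDX line above to verify that every
// file in a project declares its license.
```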

Why Do You Need a License

Licenses state your rights as a creator and the rights of all the people who interact with your code. Without a license, users do not know where you stand on using, modifying, and sharing your work. Attaching a license gives you peace of mind that your code is being used the way you want, and gives the people using your code a clear statement of what they can and cannot do with your work. 

Choosing a License For Your Code’s Documentation

Now that you have a project with a license, you need to apply a license to your documentation as well. Most of the time, it will use the same license you chose for the code by default, but in the case you do not want it to, you can choose any license that you feel applies.

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Backends for Frontends and Modern Web Architecture

For this week’s blog post I went with the “Backends for Frontends” (BFF) architecture pattern, through Microsoft Azure’s documentation (https://learn.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends), having read the article in depth since it gave me some food for thought.

The article explains how BFF works as an API design approach where each type of client, such as a web application, mobile app, or IoT device, communicates with its own tailored backend service. Instead of one universal backend trying to serve every kind of client, BFF splits the responsibilities across dedicated backend components. This lets each frontend receive data in exactly the structure and format it needs, without unnecessary complexity or over-fetching. The article also describes when BFF is useful, the problems it solves (like avoiding “one-size-fits-all” backend design), and its trade-offs, such as increased cost and operational overhead.

I chose this resource because BFF connects directly to what we’ve been doing in CS-343 with REST APIs, Express endpoints, data validation using OpenAPI, and separating the layers of an application. During the recent advanced assignment (the REST API Back End Implementation homework), I had to build new endpoints, interact with multiple collections, and send changes to a message broker. That required me to think carefully about how the different parts of the system communicate with each other.

The biggest takeaway for me was how the BFF pattern embraces specialization. Instead of making one giant backend do everything, you let each client have a backend that fits it. This avoids exposing unnecessary fields, reduces client-side processing, and improves performance. It also aligns well with microservices, because each BFF can orchestrate multiple services behind the scenes. I realized that the structure of our assignments (frontend consuming API endpoints, backend handling logic, and MongoDB representing the persistence layer) is essentially a simplified version of this architectural thinking.

This article also made me reflect on situations where BFF might solve real problems I could encounter in the near future. For example, in our backend assignment, if we had a separate mobile client, its data needs would be very different from the web interface that interacts with full guest objects. A BFF layer could format responses appropriately, avoid over-fetching, and simplify the logic on the client side. The idea of tailoring backends to frontends also helps me make sense of why large systems separate responsibilities; it keeps things maintainable and avoids tightly coupled components.
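
As a sketch of that idea (with made-up field names, not code from the article or our assignment), a web BFF and a mobile BFF might shape the same guest record differently:

```javascript
// One shared record from the persistence layer; each BFF returns only
// what its own client needs, so neither client over-fetches.

function webGuestView(guest) {
  // The web client renders the full object.
  return {
    id: guest.id,
    name: guest.name,
    address: guest.address,
    visits: guest.visits,
  };
}

function mobileGuestView(guest) {
  // The mobile client only needs a compact summary.
  return {
    id: guest.id,
    name: guest.name,
    visitCount: guest.visits.length,
  };
}
```

In a real BFF each of these functions would live behind its own endpoint, but the shaping logic is the heart of the pattern.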

So in short, I’d say that this resource strengthened my understanding of how backend architecture is shaped by client needs. It also helped me see the bigger picture behind concepts we’ve been using: validation, routing, endpoint design, and database access. Going forward, I expect this perspective to help me structure backend code more intentionally, especially when designing APIs that may evolve to support multiple types of clients.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

Understanding Linters: Enhancing Code Consistency

Recently in class we started to learn more about linting tools. In class we used a linter to check all the documents in a project to see which words were redundant or misspelled. After looking into some articles online, I realized that linters can be used for many things. For example, a company may want its developers to write code in a certain format so that other developers in the company can easily read it. Linting is not just for formatting; it can also catch coding errors, bugs, security vulnerabilities, and stylistic inconsistencies. In addition, linters exist for many different programming languages, which allows organizations or groups to set a standard for everyone coding on a project. 

A linter does all of this by dividing the code into units such as variables, types, and functions, and turning those units into tokens. It then compares these tokens against the rules built into the linter, and flags any token that differs from what the rules expect, depending on what the linter is being used for in that project. 
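
A toy version of that token-comparison idea (not how any real linter is implemented) might look like this:

```javascript
// Minimal sketch: split source into word-like tokens, then flag any token
// on a project's "disallowed" list, e.g. a team banning `var` in favor of
// `let`/`const`. Real linters build full syntax trees instead.

function lint(source, disallowed) {
  const tokens = source.match(/[A-Za-z_]\w*/g) ?? []; // crude tokenizer
  return tokens.filter((t) => disallowed.includes(t)); // flagged tokens
}
```

Running `lint("var x = 1;", ["var"])` would flag the `var` token, which is the same check-against-a-standard behavior described above, just at a much smaller scale.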

There are a lot of reasons why projects use linters. Here are a few: they decrease errors, make code more consistent, improve code quality, improve security, save money by being time efficient, and set coding expectations within the team. Let me explain further: no project or company wants errors in their program to make it to the end user. And linters save money by checking the code early, preventing issues that would cost more time and money to fix later. 

Many companies want to save as much money as possible across the different stages of development: production, design, testing, development, and maintenance. Each of these stages requires a lot of people to verify that the code works and that it is what the customer wants. Linters also mean developers do not have to spend as much time finding and fixing errors. That is why linters can save companies a lot of money for the next project or for maintaining operations. 

I do have one personal problem with linters: when I am focused on a task and then have to deal with the linter, I lose focus, and there is friction getting back to the task at hand. This is a minor problem, though, because once I get used to using a linter there will be less friction between focusing on the task and fixing the errors it catches.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Role of Compilers in Programming Explained

In one of my classes this semester we started to learn about the purpose of a compiler in programming. After learning a few things about how a compiler works, I wanted to spend some free time learning more about the subject. Even though there are many different compilers out there, they all still use the same steps to process a high-level language into machine code. The steps are lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. 

  • Lexical analysis: The compiler sends the high-level code through its lexer, which turns parts of the code, like operators and identifiers, into units that make up tokens. 
  • Syntax analysis: The compiler looks at all the tokens and checks the code for syntax errors and other programming errors against the rules of that language. 
  • Semantic analysis: Once the code is checked, the compiler uses semantic analysis to determine the meaning of the code. It also tests for logical errors; for example, the code could contain an arithmetic error. 
  • Optimization: The optimizations a compiler applies are not always the same; they depend on the goal, for example making the code run as quickly as possible or decreasing its power consumption. 
  • Code generation: Finally, the code is converted into assembly code so that the computer can read the instructions needed to run the program. 
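
As a rough illustration of the first step, lexical analysis, here is a toy lexer (far simpler than any real compiler’s) that turns an expression into tagged tokens:

```javascript
// Toy lexer: classifies each piece of an input like "x + 42" as an
// identifier, number, or operator token. Real lexers handle strings,
// keywords, comments, and much more.

function lex(source) {
  const pieces = source.match(/[A-Za-z_]\w*|\d+|[+\-*/=]/g) ?? [];
  return pieces.map((text) => ({
    text,
    kind: /^\d+$/.test(text) ? "number"
        : /^[A-Za-z_]/.test(text) ? "identifier"
        : "operator",
  }));
}
```

The syntax-analysis step would then consume this token stream to check that the tokens form a valid expression.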

The convenience of using a compiler is that it allows programmers to code in a high-level language, which is much more readable than assembly. It also lets programmers learn any high-level language without worrying about the steps needed to convert the code into assembly, since the compiler does that for them. Having the compiler check for multiple different types of errors also helps with quality assurance. Another factor to consider: certain hardware can only run specific types of code, but a compiler lets programmers choose whichever language they prefer. 

Compilers also reduce repetition: the code only needs to be compiled once, and from then on the program can be executed repeatedly. Lastly, compilers can check for errors that we might not consider, such as memory leaks and potential security issues in the code. 

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Sprint 3

For the third sprint of the course, I was in charge of merging an environment path fix and updating the version in the docker-compose on the server, demonstrating the working system to OSILD (the department that runs the food pantry), and getting the docker-compose file to run when the server is started up.

These are the links to the tickets related to this sprint:

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/9

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/8

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfofrontend/-/issues/113

The task of getting the docker-compose file to run when the server started up required some research into the best methods to get this done. During one of the course meetings, we attempted to create a file on the server containing terminal instructions that would do this, but that was unsuccessful. The simple solution ended up being to add “restart: unless-stopped” to each container in the docker-compose file. 
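
In docker-compose terms the fix looks like this (the service and image names below are placeholders, not the actual Thea’s Pantry services):

```yaml
services:
  backend:
    image: example/backend:1.0   # placeholder image
    restart: unless-stopped      # restart on boot or crash, unless manually stopped
  frontend:
    image: example/frontend:1.0  # placeholder image
    restart: unless-stopped
```

With this policy, Docker brings the containers back up automatically whenever the daemon starts, which covers the server-reboot case.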

The apprenticeship pattern I chose for this sprint was Record What You Learn. This pattern discusses the frustration of learning something that doesn’t seem to stick, then the déjà vu of doing that task again and relearning it over and over. I chose this pattern because I tend to fall into it relatively often, and I have started to develop the habit of writing down new skills to remember them better. Because I am in the process of developing this habit, it is harder to fall into this cycle of relearning, since doing so is now followed by the thought that I really should have written the process down. 

This pattern is suitable for this sprint because of how the meeting with OSILD (the department that runs the food pantry) went. I had worked on this issue and it was working right before I went to present, but as soon as I sat down with the two women I was demonstrating the server to, it would not start up. I was able to joke it off and explain what was supposed to happen, but there was something frustrating about being put on the spot and not being able to start it that made me wish I had written down, step by step, how to start the server and how to troubleshoot it. Once I left the meeting, I sat with the server open and troubleshot what was going on, using F12 to open the browser’s developer tools, look at the error messages, and do some research into what could have caused the problem. I thought it was because the backend had a new version and wasn’t updated, so I updated the container in the docker-compose file and it seemed to work. Later we found out that the reason the server wasn’t responsive was a delay: a portion of the code prevented the page from loading if it took too long. I wrote all of this down in my notebook, like the pattern recommends, and since then I have referred back to it many times. 

If I had read this pattern prior to this sprint, I would have taken more diligent notes and made sure to better track my steps so I would be able to reproduce them under more stressful conditions.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

REST API Design.

The Stack Overflow blog post “Best practices for REST API design” (March 2, 2020) caught my eye. The article explains what a REST API is and why it’s important to design it correctly. It then gives specific advice, like accepting and responding in JSON, making sure that endpoint paths are made up of nouns instead of verbs, using plural nouns for collections, allowing filtering, sorting, and pagination, handling errors correctly, versioning your API, using standard HTTP status codes, and enforcing security and caching. The main idea is that APIs should be easy to understand, reliable, adaptable to the future, and simple for other coders to use.

As I learn more about financial systems, data analytics, and making tools that can reveal or use services, I need to know how to build and use backend components in a clean way. The piece caught my attention because it is both theoretical and useful, and it is written in a clear way that connects what developers expect now with what is best practice. Because the business systems I work on in the future might use APIs to connect billing, collection, or data analysis tools, this kind of advice seems very useful.

I felt a lot of things reading this. First, I liked the focus on using resources (nouns) instead of verbs in URL paths. This is something I have seen broken in older systems (for example, /getUserDetails or /updateBillingInfo), which shows right away that the design isn’t clear. “Use nouns instead of verbs in endpoint paths,” the article says. This idea makes things clearer: when I make or use an API, I want to be able to look at endpoints and immediately understand what the system models (users, orders, and invoices), not what action is being taken. In REST, the action is taken care of by the HTTP verb.
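
A small sketch of the difference (with hypothetical routes, not ones from the article): since the HTTP method carries the verb, the path only needs the noun.

```javascript
// Verb-in-path style the article discourages:
//   GET  /getUserDetails?id=42
//   POST /updateBillingInfo
//
// Resource-noun style it recommends; the HTTP method is the verb:
const routes = [
  { method: "GET",   path: "/users/:id" },          // fetch a user
  { method: "PATCH", path: "/users/:id" },          // update a user
  { method: "GET",   path: "/users/:id/invoices" }, // list a user's invoices
];

// Paths contain only nouns; a quick check makes that explicit.
const verbFree = routes.every((r) => !/get|update|delete/i.test(r.path));
```

In an Express app each entry would become a router registration, but the naming discipline is the same either way.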

The article also covers standard error codes, versioning, filtering, sorting, and pagination. These are important for growth and maintainability but are often forgotten when speed matters most. If I build an API for a payment module today without planning for versioning, for example, clients may break when future changes arrive (like adding new fields or changing behavior), which causes problems between teams.

Basically, I learned or reinforced the idea that designing an API isn’t just about “making something that works.” It’s also about making something that other developers can use easily and that can grow. This fits with one of my personal goals, which is to be correct and organized with systems, documentation, and interfaces. A well-designed API makes things clearer, cuts down on mistakes, and allows for future growth, much like my focus on detailed paperwork and efficient financial systems.

Reference: Stack Overflow Blog. (2020, March 2). Best practices for REST API design. https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

From the blog Site Title by Roland Nimako and used with permission of the author. All other rights reserved by the author.

Quarter 3

As I approach the end of the semester, I have been doing a lot of group activities on GitLab with my teammates: switching roles and getting to know one another better while taking on the tasks we needed to. We worked on Exam 2 and finished it to assess our knowledge, then learned more about the different types of code people write and how they can affect the entire team.

For the final few days of the semester, we will continue to put our skills to the test with these various challenges and fix some of our old code issues. I’ll finish off this semester strong!

From the blog cs@worcester – Software Development by Kenneth Onyema and used with permission of the author. All other rights reserved by the author.