Category Archives: CS-343

API Security Practices

As we continue to study API implementation in my CS343 class, I find it important to keep learning more about the subject. I came across this article, which discusses 11 security practices for API implementation. One statistic brought up at the beginning of the article caught my attention: APIs now account for more than 80% of internet traffic. The first practice listed is continuous API discovery, which means automating processes that catalog the APIs across your infrastructure in real time; doing this helps prevent the deployment of shadow APIs. The next practice is encrypting traffic in every direction. This one seems fairly obvious, but it is something that cannot be forgotten, and the article lists some key encryption requirements. Authentication and authorization are very important practices as well: they give you control over how the APIs in the system are used, and they protect users by keeping their primary credentials hidden to prevent theft and misuse. Applying the Principle of Least Privilege is another good practice, since granting users only the privileges they need limits the damage a malicious user can do. Documentation is also very important for helping other developers maintain the system safely. You must also validate your data to protect against injection attacks, and make sure to limit data exposure, because data breaches are an issue no one wants to deal with. The article lists a few more practices that are important for security in systems that use APIs.
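To make a couple of these practices concrete, here is a minimal sketch, in TypeScript with Express, of what token-based authorization and input validation could look like in a single endpoint. The route, the hard-coded token, and the username rule are all invented for illustration; a real system would verify signed tokens against a proper identity provider.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();
app.use(express.json());

// Hypothetical bearer-token check. A real system would verify a signed JWT
// or look the key up in a credential store, never compare a hard-coded string.
function requireAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  if (token !== "example-secret-token") {
    res.status(401).json({ error: "Unauthorized" });
    return;
  }
  next();
}

// Validate input before any business logic runs, guarding against injection.
app.post("/api/users", requireAuth, (req: Request, res: Response) => {
  const { username } = req.body;
  if (typeof username !== "string" || !/^[A-Za-z0-9_]{3,30}$/.test(username)) {
    res.status(400).json({ error: "Invalid username" });
    return;
  }
  // Limit data exposure: return only the fields the client actually needs.
  res.status(201).json({ username });
});

app.listen(3000);
```

Even this toy version shows the layering the article argues for: authenticate the request first, validate its input next, and expose only the minimum data in the response.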

This particular article stuck out to me because, as I said at the beginning, we are learning about API implementation in my CS343 class. Knowing the typical security practices for implementing APIs is imperative for creating a good system. It’s something many may overlook, but it cannot be omitted; I myself had not considered the potential security risks of implementing APIs. This article gave good insights into those risks and how to prevent them, and it was definitely worth the time it took to read and understand. It is always important that the data going through your system is protected so that users have a problem-free experience. The problems that could arise from a lack of security could result in huge losses for a company deploying these systems. As someone who has not had to deal with that kind of risk yet, I believe it is important to learn about these potential issues before it is too late.

https://www.wiz.io/academy/api-security-best-practices

From the blog CS@Worcester – Auger CS by Joseph Auger and used with permission of the author. All other rights reserved by the author.

REST API Design

For this blog, I chose “REST API Design Best Practices Guide” from PostPilot. I chose this resource because our course works directly with REST APIs, focusing on backend development and on how different software components communicate through them. Since REST APIs are one of the most common ways to build web services, I felt it was important to reinforce what I learned in class about working with them.

The blog explains the core principles behind REST, including a stateless server, the client-server relationship, caching, and a uniform interface. It then goes into the basics of building APIs that are easy to understand. One of the first practices it suggests is designing resource-based URLs using nouns instead of verbs, keeping them simple and plural, such as /users or /orders. It also emphasizes the importance of using HTTP methods properly: GET for retrieving information, POST for creating data, PUT or PATCH for updates, and DELETE for removal. The guide goes deeper by discussing when to use path parameters versus query parameters, especially for filtering data, and explains how well-structured responses give developers context and reduce confusion.
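As a quick illustration of these naming and method conventions, here is my own sketch (not code from the PostPilot guide) of how the plural-noun, resource-based style might look as Express routes in TypeScript:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Collection endpoint: plural noun, no verb in the path.
// Query parameters handle filtering, e.g. GET /users?role=admin
app.get("/users", (req, res) => {
  res.json({ filter: req.query.role ?? "none", users: [] });
});

// Path parameter identifies one specific resource: GET /users/42
app.get("/users/:id", (req, res) => {
  res.json({ id: req.params.id });
});

app.post("/users", (req, res) => res.status(201).json(req.body)); // create
app.patch("/users/:id", (req, res) => res.json(req.body)); // partial update
app.delete("/users/:id", (req, res) => res.status(204).send()); // remove

app.listen(3000);
```

The paths never contain verbs: the HTTP method alone says whether we are reading, creating, updating, or deleting, while /users/:id (a path parameter) points at one resource and ?role=admin (a query parameter) filters the collection.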

Something I found especially insightful was the explanation of versioning and error handling. The blog explains why APIs should include versioning so that future updates don’t break existing clients, and why providing consistent, descriptive error responses is important so that developers can debug what went wrong rather than guessing. It also covers security requirements such as HTTPS and authentication tokens, as well as tools like Swagger that make an API easier for other developers to use.
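Here is a small sketch of both ideas together: versioning via a URL prefix and a consistent error shape. The error format is one I made up for illustration, not something the blog prescribes.

```typescript
import express, { Request, Response } from "express";

const app = express();
const v1 = express.Router();

v1.get("/orders/:id", (req: Request, res: Response) => {
  const order = null; // stand-in for a real database lookup
  if (!order) {
    // Consistent, descriptive error body: a machine-readable code plus a message.
    res.status(404).json({
      error: { code: "ORDER_NOT_FOUND", message: `No order with id ${req.params.id}` },
    });
    return;
  }
  res.json(order);
});

// Mounting under /api/v1 means a future /api/v2 router can change response
// shapes without breaking clients that still depend on the old ones.
app.use("/api/v1", v1);
app.listen(3000);
```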

Reading this blog improved my understanding: I’ve noticed that good planning and design are crucial, especially when multiple people use the same system. I also noticed that the HTTP method already conveys the action, so keeping endpoints focused on resources makes everything cleaner.

Hopefully, I can apply what I learned in future API work, particularly in backend development. This blog’s information will definitely be helpful for designing good, useful APIs rather than messy, unreadable ones. It has made me more aware of how important API design is, not just for functionality but for maintainability and ease of use. Reading this guide, along with what I’ve picked up in class, gives me a strong foundation for understanding REST APIs, or at least leaves me a little more informed on the topic.

https://www.postpilot.dev/blog/rest-api-design-best-practice-cheat-sheet

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

Quarter-4 Blog Post

For my last quarter blog post, I decided to explore the topic of documentation. Even though we didn’t talk about it directly in class, we were actually working with it during many of our activities, like working with REST, learning how the frontend and backend communicate, and practicing the commands needed to run both sides of a project. Most of the time the owners of a product aren’t technical, and because of that, engineers end up having to create their own documentation so everyone can understand how things work. All of our in-class activities became much easier when we had clear, easy-to-follow documentation in front of us. The blog I chose for this post, written by Mark Dappollone, goes deeper into the topic by giving tips that help both the person building the product and anyone trying to understand how the software works.

Mark’s blog explains three main ideas about documentation that people should understand. The first is that the only way to create good documentation is to just get started and write something. He points out that when there is no documentation, people are forced to rely only on the code, and if the code is wrong or changes, no one will know because there’s nothing written down to compare it to. So even if your documentation isn’t perfect, it’s still better than nothing. Without it, clarity disappears and developers end up guessing what’s going on.

The second idea is the importance of using visual aids. One of my favorite examples in the post was when he included a screenshot and labeled each field with the API calls and the data fields they come from. This instantly shows how everything connects to backend data through REST, which is exactly what we practiced in class. With visuals like that, developers can read and understand the logic behind GET or POST requests without having to guess.

The third and final idea is keeping documentation updated. In class, we noticed that even small changes to the code, like changing where data or input comes from, can make things confusing very quickly. Without good documentation, things get messy for anyone working on the product. Keeping documentation in sync with changes to both the backend and frontend helps avoid those misunderstandings.

In the future, I can definitely see myself working with API calls or making frontend fixes, and using documentation will help me understand the whole picture. It also ensures that anyone reading the documentation can see how everything ties together. I would especially like to use visual aids because I’m a visual learner, and I think many people are too. Using visuals can help clear up confusion and make the whole development process smoother.

Source: https://mdapp.medium.com/how-to-document-front-end-software-products-e1f5785279bb

From the blog Mike's Byte-sized by mclark141cbd9e67b5 and used with permission of the author. All other rights reserved by the author.

The Impact of Artificial General Intelligence on Frontend and Backend Work

I’ve been hearing about artificial general intelligence (AGI) a lot lately, so I investigated how it’s beginning to affect the day-to-day work of frontend and backend engineers. Since clean design, architecture, and concepts like SOLID and DRY are major topics of our course, I was curious about how these fundamentals might evolve as AGI advances. What I found is that AGI does not diminish the significance of smart engineering; it enhances it.

With just a prompt, AGI tools are getting better at creating visual elements, layouts, and even complex relationships between components. For frontend developers, this means less time spent writing repetitive markup and more time thinking about user experience, accessibility, and smooth interactions. Instead of hand-crafting every item, engineers will guide AGI-generated components and refine them, much like reviewing a merge request in GitLab. AGI may generate the first version, but the developer decides what is appropriate for production.

The impact is even greater for backend engineers. AGI is capable of writing controllers, creating REST API endpoints, building database structures, and even producing metadata. However, backend systems depend heavily on architecture, security, error management, and scalability, areas where AGI still needs human guidance. A developer must still apply clean design principles, prevent code smells, and build well-connected components. Just as pipelines, CI/CD, and merge approvals protect the main branch in a GitLab workflow, backend engineers must examine each AGI-generated change to ensure system stability.

One thing that sticks out to me is how AGI transforms the developer’s position from “code writer” to “system thinker.” Instead of typing every line manually, developers will focus on verifying logic, detecting edge cases, defining patterns, and structuring interactions. This is consistent with our understanding of UML diagrams and architectural styles: humans define the structure, while AGI can produce its parts. Using GitLab as an example makes this clear. Even if AGI generates code on a feature branch, the developer still analyzes the merge request, reviews pipeline results, and ensures the update matches project requirements. AGI can aid, but it cannot replace human expertise in maintaining clean design, secure APIs, or reliable backend logic.

Overall, I concluded that frontend and backend roles are not vanishing; they are evolving. AGI will automate routine tasks, while developers spend more time on design, problem-solving, ethical decision-making, and long-term maintainability. Understanding ideas like abstraction, encapsulation, and GRASP patterns will remain crucial, since AGI operates best under strong human guidance.

References

  • OpenAI. AI and Software Development Workflows. 2023.
  • GitLab Documentation. Merge Requests & CI/CD Pipelines.
  • Bostrom, N. Superintelligence: Paths, Dangers, Strategies. 2014.

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

Backends for Frontends and Modern Web Architecture

For this week’s blog post I went with the “Backends for Frontends” (BFF) architecture pattern, through Microsoft Azure’s documentation (https://learn.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends). I read the article in depth, since it gave me some food for thought.

The article explains how BFF works as an API design approach where each type of client, such as a web application, mobile app, or IoT device, communicates with its own tailored backend service. Instead of one universal backend trying to serve every kind of client, BFF splits the responsibilities across dedicated backend components. This lets each frontend receive data in exactly the structure and format it needs, without unnecessary complexity or over-fetching. The article also describes when BFF is useful, the problems it solves (like avoiding “one-size-fits-all” backend design), and its trade-offs, such as increased cost and operational overhead.
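A tiny sketch helped me picture this. Below, two Express services stand in for a web BFF and a mobile BFF over the same underlying product data; the ports, fields, and data are invented for illustration, not taken from the Azure article.

```typescript
import express from "express";

// Shared domain data both BFFs draw from (a stand-in for downstream services).
const product = {
  id: 42,
  name: "Widget",
  description: "A long marketing description...",
  price: 9.99,
  warehouseNotes: "internal only",
};

// Web BFF: the browser client wants rich detail for a product page.
const webBff = express();
webBff.get("/products/:id", (_req, res) => {
  const { id, name, description, price } = product;
  res.json({ id, name, description, price });
});

// Mobile BFF: the phone client needs a slim payload to avoid over-fetching.
const mobileBff = express();
mobileBff.get("/products/:id", (_req, res) => {
  const { id, name, price } = product;
  res.json({ id, name, price });
});

webBff.listen(3001);
mobileBff.listen(3002);
```

Neither client ever sees warehouseNotes, and the mobile client is never forced to download the long description it would not display.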

I chose this resource because of how directly BFF connects to what we’ve been doing in CS-343 with REST APIs, Express endpoints, data validation using OpenAPI, and separating the layers of an application. During the recent advanced assignment (the REST API Back End Implementation homework), I had to build new endpoints, interact with multiple collections, and send changes to a message broker. That required me to think carefully about how the different parts of the system communicate with each other.

The biggest takeaway for me was how the BFF pattern embraces specialization. Instead of making one giant backend do everything, you let each client have a backend that fits it. This avoids exposing unnecessary fields, reduces client-side processing, and improves performance. It also aligns well with microservices, because each BFF can orchestrate multiple services behind the scenes. I realized that the structure of our assignments, with the frontend consuming API endpoints, the backend handling logic, and MongoDB as the persistence layer, is essentially a simplified version of this architectural thinking.

This article also made me reflect on situations where BFF might solve real problems I could encounter in the near future. For example, in our backend assignment, if we had a separate mobile client, its data needs would be very different from those of the web interface that works with full guest objects. A BFF layer could format responses appropriately, avoid over-fetching, and simplify the logic on the client side. The idea of tailoring backends to frontends also helps me make sense of why large systems separate responsibilities: it keeps things maintainable and avoids tightly coupled components.

In short, I’d say this resource strengthened my understanding of how backend architecture is shaped by client needs. It also helped me see the bigger picture behind concepts we’ve been using: validation, routing, endpoint design, and database access. Going forward, I expect this perspective to help me structure backend code more intentionally, especially when designing APIs that may evolve to support multiple types of clients.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

The Role of Compilers in Programming Explained

In one of my classes this semester we started to learn about the purpose of a compiler in programming. After learning a few things about how a compiler works, I wanted to spend some free time learning more about the subject. Even though there are many different compilers out there, they all use the same steps to translate a high-level language into machine code: lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. (A toy example of the first step appears after the list below.)

  • Lexical analysis: The compiler’s lexer scans the high-level source code and breaks it into tokens, turning pieces of the code such as operators and identifiers into labeled units.
  • Syntax analysis: The compiler parses the tokens and checks the code for syntax errors and other violations of the language’s rules.
  • Semantic analysis: Once the code parses, the compiler determines what it means and checks for logical errors, for example an invalid arithmetic operation.
  • Optimization: Optimizations differ from compiler to compiler depending on the goal, for example making the code run as quickly as possible or reducing its power consumption.
  • Code generation: Finally, the code is converted into assembly code so the computer can execute the instructions needed to run the program.
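To see what the first of those steps actually does, here is a toy lexer in TypeScript, vastly simpler than a real compiler’s, that chops a line of source code into labeled tokens. The token categories and the regular expression are simplified for illustration.

```typescript
// A toy lexer: the first compiler stage, turning characters into tokens.
type Token = { kind: "identifier" | "number" | "operator"; text: string };

function lex(source: string): Token[] {
  const tokens: Token[] = [];
  // Match identifiers, integer literals, or single-character operators.
  const pattern = /[A-Za-z_]\w*|\d+|[=+\-*/]/g;
  for (const text of source.match(pattern) ?? []) {
    const kind = /^\d/.test(text)
      ? "number"
      : /^[=+\-*/]$/.test(text)
      ? "operator"
      : "identifier";
    tokens.push({ kind, text });
  }
  return tokens;
}

// "x = a + 2" becomes: identifier x, operator =, identifier a, operator +, number 2
console.log(lex("x = a + 2"));
```

A real lexer also tracks line numbers, handles strings and comments, and reports illegal characters, but the idea is the same: the later stages work with these tokens rather than raw text.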

The convenience of using a compiler is that it allows programmers to code in a high-level language, which is far more readable than assembly. It also frees programmers to learn any high-level language without worrying about the steps needed to convert the code to assembly, since the compiler does that for them. Having the compiler check for multiple types of errors also helps with quality assurance. Another factor to consider is that certain hardware can only run specific kinds of code, but a compiler lets programmers choose whichever language they prefer.

Compilers can also reduce repetition, since the code only needs to be compiled once and can then be executed repeatedly. Lastly, compilers can check for errors we might not consider, for example memory leaks and potential security issues in the code.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Delving Deeper into REST APIs

I have been learning more about REST APIs. Currently I am working on backend implementation, which is proving somewhat difficult. I understand how to implement new endpoints to get certain objects, but I have trouble with other things, mostly to do with the current homework.

So, I decided to learn more about it. I went to a blog from Stack Overflow, since it is well known for having useful information about coding; if you need a question answered or a working piece of code, you go there. I have used it before, and it was helpful. Hopefully, by the end I will have a better understanding of the material and can apply that newfound knowledge to the homework.

I used this: Best practices for REST API design – Stack Overflow

The article discusses the best practices for REST API design. It covers using logical nesting to group associated information and handling errors gracefully. It also discusses allowing clients to filter, sort, and paginate data; these techniques help deal with large datasets by returning only the data that is needed. Say there are 400 pieces of data and you only need 4% of them: by filtering, you can get that 4% without a slowdown and without parsing through the rest to find it. The article also discusses versioning APIs, maintaining good security practices, caching data, and smaller things like using nouns instead of verbs in endpoints. Finally, it discusses the importance of JSON and how it is used for transferring data.

There is a lot of useful information here, but what stood out to me was the section on filtering, sorting, and paginating data. There is a part of my homework that requires filtering: I need to make sure that when guest age is requested, depending on how the GET request is structured, the resulting list contains ages greater than, less than, or equal to the chosen value. That section of the article gave me an idea. Like the example given there, I could use an if statement to filter out results, which may get the result I want.
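Here is roughly the kind of if-statement filtering I have in mind, sketched with Express in TypeScript. The query-parameter names and the in-memory guest list are my own stand-ins, not the homework’s actual specification.

```typescript
import express from "express";

const app = express();

// Hypothetical stand-in for the guests collection in the assignment.
const guests = [
  { name: "Ada", age: 30 },
  { name: "Ben", age: 17 },
  { name: "Cam", age: 45 },
];

// GET /guests?age=30&op=gt -> guests older than 30
// GET /guests?age=30&op=lt -> guests younger than 30
// GET /guests?age=30       -> guests exactly 30
app.get("/guests", (req, res) => {
  const age = Number(req.query.age);
  const op = req.query.op;
  let result = guests;
  if (!Number.isNaN(age)) {
    if (op === "gt") result = guests.filter((g) => g.age > age);
    else if (op === "lt") result = guests.filter((g) => g.age < age);
    else result = guests.filter((g) => g.age === age);
  }
  res.json(result);
});

app.listen(3000);
```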

To conclude, this article has been really useful to me. It has helped me understand what I have to do for part of my homework assignment, and it was interesting to learn ways to improve my REST API design. I may not know whether I will use this information in the future, but I know that I will use it now.

From the blog CS@Worcester – My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Strategic Web Systems Implementation: Key Principles for Modern Applications

For this week’s professional development blog, I delved into the article “Modern Web Implementation: Best Practices & Strategies” from NoPassiveIncome. The piece provides an actionable, business-oriented framework for building web systems that are performant, maintainable, and user-centric. Given our course focus on web architecture, design, and deployment, this article offers vital insights that closely align with our syllabus.

Summary of the Article

The article begins by establishing the importance of robust web implementation, arguing that effective execution translates design into a reliable, high-performance website. It underscores that a poorly implemented system can degrade user experience, SEO, and long-term maintainability.

Key elements of strong web implementation are described in detail:

  • User Experience (UX): The article emphasizes intuitive navigation, clean design, and accessibility, asserting that a stellar UX fosters engagement and retention.
  • Responsive Design: Given the ubiquity of mobile devices, implementation should prioritize fluid layouts that perform seamlessly across a range of screen sizes.
  • SEO Best Practices: From keyword placement to meta tags, the article recommends embedding SEO considerations into the implementation phase rather than treating them as an afterthought.
  • Performance Optimization: Techniques such as image compression, code minification, and browser caching are explored to minimize load times and maximize responsiveness (see the caching sketch just after this list).
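As one concrete example of the caching technique, and purely as a sketch of my own rather than anything from the article, an Express server can send static assets such as compressed images and minified scripts with a Cache-Control header so browsers keep them for repeat visits:

```typescript
import express from "express";

const app = express();

// Browser caching: static assets may be cached for a day, so repeat
// visits skip the download entirely; ETags let browsers revalidate cheaply.
app.use(
  express.static("public", {
    maxAge: "1d", // sets Cache-Control: max-age=86400
    etag: true,
  })
);

app.listen(3000);
```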

The article also addresses common challenges, such as cross-browser compatibility and mobile optimization, and offers practical solutions like extensive testing and responsive design. It highlights emerging trends in web implementation, including progressive web apps (PWAs), voice search optimization, and AI-powered chatbots. Finally, the piece recommends essential tools like Google Analytics for behavior tracking, SEMrush for SEO analysis, and Bootstrap for streamlined responsive development.

Why I Chose This Resource

I selected this article because it directly aligns with our Implementation of Web Systems course. We’ve been discussing architecture, design trade-offs, and the practicalities of building real-world web applications. This resource synthesizes those academic topics into concrete, industry-ready guidelines.

Moreover, as I prepare for future roles where I may design, maintain, or deploy web systems, I want a strategic understanding of implementation that goes beyond coding, one that balances technical best practices with user-centered concerns.

What I Learned & How It Enhances My Understanding

This article reinforced for me that web implementation is more than just writing code; it’s about strategic execution. The discussion of UX taught me that simplicity and accessibility matter just as much as backend logic. When I build web systems in class, I plan to pay greater attention to how every design decision affects usability.

I was also struck by the emphasis on SEO during implementation. In class, SEO sometimes feels secondary, but this article made clear how deeply it should influence how we build, structure, and implement web pages.

Finally, the performance optimization section resonated strongly with what we’ve learned about efficient web architecture. Minimizing asset size and leveraging browser caching are practical techniques that I aim to use in future projects to ensure speed and reliability.

How I Will Apply These Insights

In upcoming web development assignments and real-world projects, I plan to:

  • Start with user experience design and accessibility, not just backend features.
  • Always build with responsive layouts and test across devices.
  • Treat SEO as an integrated component of implementation, not an afterthought.
  • Implement performance optimizations from the beginning, including image compression, code minification, and caching.
  • Use analytics tools like Google Analytics to track performance and gather actionable user behavior data.

This article has deepened my confidence in building web systems that are not only functional but also optimized, user-friendly, and future-proof: exactly the kind of approach this course encourages.

Citation / Link
“Modern Web Implementation: Best Practices & Strategies.” NoPassiveIncome, accessed 2025. Available online: https://nopassiveincome.com/modern-web-implementation/

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Choosing the Right Open Source License

Resource: “Choose an open source license,” ChooseALicense.com (https://choosealicense.com/)

For this week’s self-directed professional development, I explored the topic of choosing open source licenses, which is a fundamental but often overlooked part of releasing software. I based my reading on ChooseALicense.com and supporting resources that explain how permissive licenses, copyleft licenses, and public domain-style licenses differ. What I found most interesting is how a license doesn’t just define legal rules — it reflects the values, intentions, and goals of a developer or a team. Software licensing shapes how a project evolves, how a community forms, and how contributions are handled over time.

Open source licenses fall into a few broad categories. Permissive licenses like MIT or Apache 2.0 give users almost complete freedom to reuse the code, even in closed-source commercial products. Copyleft licenses like GPL ensure that any derivative work must remain open source under the same license. And options like the Unlicense or CC0 place code essentially in the public domain, allowing anyone to use it with zero restrictions. Before this week, I assumed licensing was just a legal formality, but now I understand how strongly each license type influences collaboration and long-term project direction.
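One lightweight convention worth knowing (my own aside, not something ChooseALicense.com requires) is to pair the full license text in a LICENSE file at the repository root with a machine-readable SPDX identifier at the top of each source file, so tools and readers can tell at a glance how the code is licensed:

```typescript
// SPDX-License-Identifier: MIT
// Full license text lives in the LICENSE file at the repository root.

export function greet(name: string): string {
  return `Hello, ${name}!`;
}
```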

I chose this topic because licensing is directly connected to the work we do in Software Process, especially when we talk about transparency, collaboration, and project ownership. As future developers, we will eventually publish our own tools, libraries, or contributions. Knowing how to license our work is part of being a responsible member of the open source community. Many people assume that posting code publicly means anyone can use it, but without a license, nobody can legally copy, modify, or reuse it. That detail alone made this topic worth exploring, and it helped me rethink how important explicit permissions are.

One thing I learned is that choosing a license is really about choosing a philosophy. If a developer wants to share knowledge broadly, enable commercial use, and reduce friction for adoption, a permissive license makes sense. If the goal is to ensure the code stays free for everyone and cannot be closed off by others, a copyleft license protects that intention. The reading made me think carefully about what I would want if I released a personal project. Personally, I lean toward permissive licenses because I want people to build on my work without worrying about legal constraints. But I also understand why larger community-focused projects might choose GPL to preserve openness.

Going forward, I expect licensing to be something I pay more attention to in both school projects and professional work. As software engineers, we’re not just writing code; we’re shaping how others can interact with it. Licensing is part of that responsibility. This topic helped me better appreciate the intersection between technology, ethics, creativity, and law — and it reminded me that releasing software is more than just pushing code to GitHub; it’s about defining how that code fits into the larger ecosystem of open source development.

From the blog CS@Worcester – Life of Chris by Christian Oboh and used with permission of the author. All other rights reserved by the author.

Have Great Documentation with These 9 Steps

In my past couple of blogs, on object-oriented programming, REST APIs, and frameworks, I often came across the word “documentation,” but I tended to overthink it and never investigated it further. With that being said, I came across this blog, 9 Software Documentation Best Practices + Real Examples, and now I feel a lot more comfortable with documentation as a whole. https://www.atlassian.com/blog/loom/software-documentation-best-practices

Imagine: you and your development team are coding away on an application when an error appears that no one present knows how to solve.

To solve this error, your options are:

  1. Slack threads
  2. Stack Overflow
  3. Emails back and forth
  4. Documentation 

The most promising of these options is documentation. With great documentation, you get a clear, precise solution to the problem you are facing.

Now, what exactly is documentation? Documentation is written information that clearly explains a program’s structure, its functionality, and when and how to use it; it should serve as a guide or reference for developers. Done right, documentation reduces friction, boosts productivity, and helps users understand the code from all angles through its coverage of processes, functionality, and solutions.
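For example, documentation can live right next to the code it describes. Here is a small made-up TypeScript function using TSDoc-style comments to capture its purpose, parameters, and failure modes for the next developer:

```typescript
/**
 * Calculates the total price of an order, including tax.
 *
 * @param subtotal - Sum of item prices before tax, in dollars
 * @param taxRate - Tax rate as a decimal, e.g. 0.0625 for 6.25%
 * @returns The total rounded to two decimal places
 * @throws RangeError if subtotal or taxRate is negative
 */
export function orderTotal(subtotal: number, taxRate: number): number {
  if (subtotal < 0 || taxRate < 0) {
    throw new RangeError("subtotal and taxRate must be non-negative");
  }
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}
```

Comments like these are just a starting point; the higher-level documents the blog describes (guides, FAQs, style guides) build on top of them.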

With so many benefits, documentation should be at the forefront for software development teams, but unfortunately, it often gets overlooked. One of the main reasons is that documentation can go outdated very quickly.

That being stated, here are nine ways to make sure your documentation will always be great:

  1. Know your target audience
    1. General Audience (surface level) vs Developers (technical)
  2. Keep it user-friendly
    1. Having a table of contents with clear headings can make it a lot easier to find what the user is looking for
  3. Use version control
    1. By doing this, you will be able to see all the latest updates
  4. Incorporate visuals
    1. Some people are visual learners who work better from a video, photo, and/or diagram (within reason)
  5. Adopt a documentation style guide
    1. Consistent styling makes it easier for users moving between documents from different writers and developers
  6. Update regularly
    1. By doing this, your documentation will align with the latest software functionality
  7. Encourage collaborative documentation
    1. Working together in person or remotely 
    2. Always good to have a peer’s input
  8. Provide troubleshooting and FAQs
    1. This enables users to find solutions for common issues very effectively
  9. Use documentation templates
    1. This will give a great start to all coming contributors of the documentation 

Overall, I really enjoyed this article and its purpose of making software documentation better. Furthermore, it mentioned how Google and Microsoft each stick to a consistent documentation style, which opened my eyes to how important consistency in documentation is within a company. Along with this, before “jumping” into something new, it is definitely worth going over the documentation first; understanding and referencing documentation is a skill.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.