Category Archives: CS@Worcester

The Importance of Frontend Development

Frontend development is an essential part of building any website or application. It is the part of web development responsible for what users see and interact with: the user interface and user experience of a website or application. The author explains that the frontend is built using technologies like HTML, CSS, and JavaScript. HTML is the backbone of any webpage because it provides structure and organization, defining things like headings, images, paragraphs, and links. CSS works with HTML by controlling the visual presentation of the page, allowing developers to customize colors, fonts, and layouts. Meanwhile, JavaScript adds interactivity, handles user input, and brings dynamic behavior to the page. These technologies all work together. The article also outlines typical tools and practices used by frontend developers, like code editors, local development servers, version control systems, package managers, and build tools. In addition, it talks about styling techniques, making content responsive, adding interactivity via DOM manipulation and event handling, and using frontend frameworks or libraries to streamline development. Overall, the article supports advancing your frontend development skills and makes the topic much more approachable for a beginner who doesn’t know much about it beforehand.
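As a minimal sketch of how the three layers divide responsibilities (the element names and the greeting function here are made up for illustration, not taken from the article):

```typescript
// HTML provides the structure: a button and an empty paragraph.
const markup = `<button id="greet">Say hello</button><p id="out"></p>`;

// CSS controls the visual presentation of those same elements.
const styles = `#greet { background: steelblue; color: white; }`;

// JavaScript adds the behavior; in a browser this function would run
// inside a click-event listener and write its result into the #out paragraph.
function greet(name: string): string {
  return `Hello, ${name}!`;
}
```

Each layer can change independently: restyling the button touches only the CSS, and new behavior touches only the JavaScript.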

I chose this article because in our class we have used the backend code for Maria’s Pantry many times, but we never really used the frontend until the very end of the semester. Since we didn’t have as much time to go over it, I often found myself confused about what the major difference was and how important it really is. I mainly wanted to do my own research and read about it, since I felt it would be extremely important for any developer to know as much as possible about it.

After reading the blog post, I started to realize how important HTML, CSS, and JavaScript are to frontend development; they hold up and give structure to the webpage. The post also names some platforms where you can practice frontend development, like Udemy, Coursera, and Codecademy, which cover most of the basics behind HTML, CSS, and JavaScript. I plan to look into them and practice as much as I can, because I would like to build my skills as much as possible; I believe that would be valuable for any career I pursue in web development.

https://namtheartist95.medium.com/what-is-frontend-development-5fdfdd892464

From the blog Thanas CS343 Blog by tlara1f9a6bfb54 and used with permission of the author. All other rights reserved by the author.

Improving API Documentation

OpenAPI and Swagger are tools that software developers use every day, and they are extremely important for building clear, maintainable, and interactive API documentation. The name of the article I chose is “How to improve API documentation with Swagger and OpenAPI”. According to this article, APIs are central to modern software design, and their documentation plays a critical role in ensuring that developers can consume and maintain them correctly. “Using the OpenAPI Specification with the Swagger ecosystem offers the much-needed standardization to REST API documentation”. The article also explains that the OpenAPI Specification is both human- and machine-readable: it clearly states the structure of the API, its endpoints, parameters, responses, and data models. This in turn helps reduce the ambiguity that often results from loosely documented APIs.
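As a rough illustration of what that machine-readable structure looks like, here is a minimal OpenAPI 3.0 document written as a TypeScript object in the same shape as its JSON form (the API name, endpoint, and schema are hypothetical; Swagger tools would normally consume this as YAML or JSON):

```typescript
// A minimal OpenAPI 3.0 definition: one endpoint, one response, one schema.
const apiDefinition = {
  openapi: "3.0.0",
  info: { title: "Pantry API", version: "1.0.0" }, // hypothetical API
  paths: {
    "/items": {
      get: {
        summary: "List pantry items",
        responses: {
          "200": {
            description: "A JSON array of items",
            content: {
              "application/json": {
                // $ref reuses a shared schema component, as OpenAPI encourages
                schema: { type: "array", items: { $ref: "#/components/schemas/Item" } },
              },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      Item: { type: "object", properties: { name: { type: "string" } } },
    },
  },
};
```

Because every endpoint, response, and model is spelled out this explicitly, both humans and tools like Swagger UI can read the same document without ambiguity.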

The Swagger ecosystem also includes several tools: the editor, the UI, codegen, and the inspector. The editor allows developers to design and edit OpenAPI definitions in JSON or YAML with built-in validation, so syntax errors can be caught right then and there. The UI presents OpenAPI definitions as documentation from which users can try API endpoints within their web browser. Codegen produces client libraries, server stubs, and SDKs that facilitate rapid development on multiple platforms. Finally, the inspector is a tool for testing APIs directly and generating an OpenAPI definition based on existing APIs.

There is also an updated version, OpenAPI 3.0, whose official release brought more modularity and a new approach to defining the surface area of an API. It provides even greater flexibility in describing an API’s request and response models, and the most recent version emphasizes good schema and component reuse as well as handling of multipart documents.

The reason I chose this topic is that we have been doing a lot of work with Swagger and APIs, and I wanted to look closely into how vital it is to a software developer in the real world. I also wanted to look closer into how Swagger can improve my design skills. After reading this article, I started to see why proper documentation isn’t just something nice and handy but a necessity for being a skilled developer. From now on, I plan to strengthen my understanding of Swagger and APIs, because I think it will also help me improve my coding skills and future projects.

Resources: https://www.techtarget.com/searchapparchitecture/tip/How-to-improve-API-documentation-with-Swagger-and-OpenAPI

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Companies and API Security

I recently wrote about an article that discussed important security practices for implementing APIs. Security seems like something that could not be overlooked by major companies, yet 65 percent of Forbes AI 50 companies have leaked verified secrets on GitHub. Reportedly, VS Code extensions keep exposing things such as API keys, tokens, and other digital credentials when projects are uploaded to GitHub. One of the researchers attributes this partially to “vibe coding”. It has also been discovered recently that LLMs can give up API keys in response to certain prompts; with the rising usage of LLMs, this can be incredibly dangerous for these companies. Wiz is a security company that sells secret scanning as a service; it was recently purchased by Google for $32 billion. Wiz found that the most common sources of these leaks were Jupyter Notebooks, Python files, and environment files. The keys and tokens found in these leaks could cause severe issues for these companies, and could also expose training data that may include sensitive business information. Wiz says that multiple companies found to have these leaks were notified about the exposures, but half the security disclosures either couldn’t be delivered or received no response. The article ends by saying: “The first step toward solving your secret exposure problem is admitting that you have a problem.”
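A common first defense against exactly this kind of leak is to keep credentials out of the source tree entirely and read them from the environment at runtime. A hedged sketch (the variable name `API_KEY` and the function are hypothetical, not from the article):

```typescript
// Read an API key from an environment map instead of hardcoding it,
// so the secret never appears in a file that gets committed to GitHub.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env.API_KEY; // set outside the repo, e.g. in a CI secret store
  if (!key) {
    throw new Error("API_KEY is not set; refusing to start");
  }
  return key;
}

// In a real app you would call: getApiKey(process.env)
// and never write the key as a string literal in any source file.
```

Failing fast when the key is missing also means a misconfigured deployment surfaces immediately instead of silently running without credentials.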

This was very surprising to me. As I have been learning about APIs, I have realized how important it is to ensure that everything is secure. When I had read the other day about API security practices, I thought that these were standard practices that companies would not omit. When I had seen this article, I figured that I had to read it. It was such a shock to me that high-level tech companies would omit such an important step in API implementation at such a high rate. To make matters worse, many of them did not attempt to clean this issue up. The last line of the article really stood out to me. I couldn’t imagine being a part of such an important company and ignoring such a large issue. As AI continues to grow, I believe the companies that make an effort to keep their systems secure will be the ones that flourish. Overall, this article opened my eyes to a lot. It seems to me that it is good that I am learning about APIs now, as large AI companies still have a lot to learn as well.

https://www.theregister.com/2025/11/10/ai_companies_private_api_keys_github/

From the blog CS@Worcester – Auger CS by Joseph Auger and used with permission of the author. All other rights reserved by the author.

Choosing Code Rules: Navigating Licensing for Health Technology

Understanding software licensing is one of those crucial non-coding topics that becomes a massive deal the moment you start building real applications. When covering this in my Software Process Management course, I was overwhelmed by how dense and intimidating the numerous license types and legal details were. For this blog post, I decided to use the opportunity to strengthen my understanding of these various software licenses. This information is vital for my plans, especially for creating high-integrity therapeutic platforms where security and protecting ideas are non-negotiable. To achieve this goal, I looked at several resources to deepen my understanding.

To make sense of licensing, the best first step is grouping licenses into two major categories: proprietary (closed source) and open source. As the resource by Zluri, “3 Major Types of Software Licenses & Its Categories”, states, proprietary licenses are the most restrictive type, as their whole purpose is to keep the source code private. This means you can only use the software with permission and under strict conditions; you will not be able to edit or distribute the code yourself. For my work, this proprietary model might be necessary to protect the specific clinical methodology or algorithms I develop for future therapeutic platforms. Open source licenses, on the other hand, make the source code publicly available, which is best for those looking for collaboration and efficiency.

However, as the resource by Black Duck states, open source licenses can themselves become overwhelming. To understand them better, I grouped the following licenses from easiest to most difficult to handle. Permissive licenses (e.g., MIT and Apache 2.0) are the easiest to use, as they are very flexible and only require you to include the original copyright notice. Copyleft licenses (e.g., GPL) have a strict rule to follow: if you distribute a modified version of the software, you must also make your modified source code available under the same copyleft license. If I used a copyleft component in a therapeutic game, for example, I would be forced to release the source code of my entire game, undermining the protection of its core methodology and design.

Licensing is about more than just reading and considering rules; strategic planning is very important when choosing which license to work with. The choice directly impacts the ethics, decisions, and compliance of any platform handling sensitive information. If I am to build things like interactive feedback systems or symptom management trackers, I need to know exactly which third-party tools I can safely use. My current task is figuring out that delicate balance to ensure I build professional, compliant, and sustainable software that protects both users and the core therapeutic intervention.

Main Resources:
3 Major Types of Software Licenses & Its Categories
https://www.zluri.com/blog/types-of-software-licenses

Five types of software licenses you need to understand
https://www.blackduck.com/blog/5-types-of-software-licenses-you-need-to-understand.html

Additional Resources:
Software Licensing Models: Your Complete Guide
https://www.revenera.com/blog/software-monetization/software-licensing-models-types/

Software License Types, Examples, Management, & More Explained
https://finquery.com/blog/software-licenses-explained-examples-management/

From the blog CS@Worcester – Vision Create Innovate by Elizabeth Baker and used with permission of the author. All other rights reserved by the author.

Design Patterns

Hello everyone,

This will be my last blog of the semester, and since it is the last one, I wanted to do something fun. When we went over design patterns in class, I was really amazed, and I kept finding myself going back and thinking about them. I was even surprised to see that I had not written a blog about them yet. So I did a little digging, and in this week’s blog I want to talk about a design pattern that we didn’t cover in class: “…drum roll please…” the Adapter Pattern!!!!

The idea behind it is pretty simple, and as you can guess from the name, its general idea in software development is identical to the one in the physical world. Say you travel abroad and want to charge your phone or laptop, but the power outlet has a different shape from what you are used to seeing back home. Or, in a different scenario, you want to connect your old console to your new TV, but the plugs and ports are not compatible. What do you do? In both cases you use an adapter, which allows you to use your own plugs in scenarios where you normally could not.

The Adapter Pattern follows the same principle, applied to object-oriented programming. A prime example of when it is used is when you already have a class and want to reuse it with a different interface; rather than creating a whole new class just for that specific interface, you use the Adapter Pattern. You implement a class that bridges the gap between the expected interface and an existing class. That enables you to reuse an existing class that doesn’t implement a required interface, and to use the functionality of multiple classes that would otherwise be incompatible.

The biggest advantage of the Adapter Pattern is that you don’t need to change the existing class or interface. By introducing a new class, which acts as an adapter between the interface and the class, you avoid any changes to the existing code. That limits the scope of your changes to your software component and avoids side effects in other components or applications. This will save you a lot of time, effort, and a lot of headaches.
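Here is a small sketch of the pattern (the source article uses Java; this is my own TypeScript version, and all the class and method names are invented for illustration):

```typescript
// The interface the client code expects.
interface MediaPlayer {
  play(file: string): string;
}

// An existing class we cannot change, with an incompatible method name.
class LegacyAudioLibrary {
  startPlayback(path: string): string {
    return `legacy playing ${path}`;
  }
}

// The adapter bridges the gap: it implements the expected interface
// by delegating to the legacy class, which stays untouched.
class AudioAdapter implements MediaPlayer {
  constructor(private legacy: LegacyAudioLibrary) {}
  play(file: string): string {
    return this.legacy.startPlayback(file);
  }
}

const player: MediaPlayer = new AudioAdapter(new LegacyAudioLibrary());
// player.play("song.mp3") → "legacy playing song.mp3"
```

Notice that neither `MediaPlayer` nor `LegacyAudioLibrary` changed; only the new adapter class was added, which is exactly what keeps the change isolated.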

In conclusion, be smart like the Adapter Pattern you will be using, and give your work and projects the ability to connect to any interface without being held back by interface incompatibility.

Source: https://medium.com/@akshatsharma0610/adapter-design-pattern-in-java-fa20d6df25b8

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

Writing Good Documentation

Sources: Ángel Cereijo and GeeksForGeeks

Writing documentation for your project can be tedious and boring. Say you just finished your project, you published it, and you’re excited to see people use it… but they don’t know how. You need documentation to explain your project. 

Software documentation is the writing that goes along with your project, ranging from what your program does, how you plan to build the program, or any notes you may have for your team. There are four types of documentation: requirement, architectural, technical, and end-user. Requirement documentation explains the requirements for the program and how it should perform. Architectural documentation explains how the program should be structured and how processes should work with each other. Technical documentation explains the technical parts of the program, like the API and algorithms. End-user documentation explains how the software works for the user. 

Good documentation should be created throughout the development process. Saving it all until the end will burn you out and set you back. To prevent this, progress the documentation as you build the project. Split your project into smaller pieces, like how it is done in an Agile setting, and improve the documentation as new features are developed. 

Documentation should explain the purpose of the project as well. It should set up a problem that your program solves and why it is useful to solve the problem. Communicate with your team on how the problem is going to be solved and a plan to get there. Then, split the project up into tasks and assign them to the team members. Explain in detail any information needed for each task. Having the research and plan in writing makes it so anyone can complete the task at any time. As tasks are completed, update the documentation to the project before calling it completed. 

Having explanatory documentation also ensures that tests can be written easily, even without the actual code that is being tested. Then, as you write the code for that task, the tests are already completed and you can move on. 
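For instance, a test can be written straight from a documented requirement before the implementation exists. A hedged sketch, where the function name and the requirement are made up for illustration:

```typescript
// Documented requirement: "totalPrice returns the sum of all item prices."
// The test below was written from that sentence alone; the implementation
// is just a stand-in for the code a teammate would write later.
function totalPrice(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

// The test, derived from the documentation rather than the code:
const testPasses = totalPrice([2.5, 3.5, 4.0]) === 10.0;
```

Because the test encodes the documented behavior, whoever implements the task later has an unambiguous target to hit.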

As your team progresses through the project, your initial plan and process will probably change. With new resources and findings, the initial documentation needs to be updated as well. 

Once your project is published, your documentation is there to help the users easily use your program. It needs to be written in simple language for any reader and avoid ambiguity/unneeded repetition. 

Writing documentation is necessary for every project, helping to communicate with your development team and to serve as a guide to the users. It is important to write good documentation and keep in mind how other people will use it. Having the basics down of writing good documentation will help me in the future and give me a step up as a new software developer. 

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Stay Educated

Let’s explore software process management through a domain that often gets overlooked in technical discussions (though, in my opinion, it shouldn’t be): education systems. To do this, I read the UNESCO article “Education Management Information Systems Progress Assessment Tool: A Methodological Guide for Educational Transformation.” The article explains how structured processes, data governance, and continuous improvement cycles help educational institutions operate more “effectively and equitably”.

The UNESCO guide introduces the Education Management Information Systems Progress Assessment Tool (EMIS‑PATT), a framework designed to help ministries of education evaluate and improve their data management processes. The article emphasizes that high‑quality education depends on high‑quality data — and achieving that requires clear processes for data collection, validation, analysis, and reporting. It outlines a structured methodology that educational organizations can use to strengthen institutional capacity, improve decision‑making, and support long‑term transformation efforts aligned with global education goals.

I chose this article because it highlights how process management principles extend far beyond software development. Education systems face many of the same challenges we discuss in SPM: inconsistent workflows, unclear responsibilities, poor documentation, and difficulty scaling processes across teams. Seeing these issues in a non‑technical domain helped me appreciate how universal process thinking really is. I was also drawn to this resource because it connects directly to real‑world impact — improving data processes in education can influence policy decisions, resource allocation, and ultimately student outcomes.

What struck me most is how closely EMIS‑PATT mirrors the software process models we study in class. For example, its emphasis on iterative assessment and continuous improvement parallels Agile’s sprint cycles. Its focus on standardizing workflows resembles the structured phases of Waterfall or the disciplined practices of DevOps. Even the idea of strengthening “institutional capacity” reminded me of how development teams invest in tooling, documentation, and onboarding to improve long‑term productivity.

This article also reinforced the importance of process transparency. In education, unclear data processes can lead to inaccurate reporting, inequitable resource distribution, or students being overlooked entirely. In software engineering, unclear processes can lead to bugs, delays, and misaligned expectations. In both cases, the solution is the same: define the process, measure it, and improve it continuously.

UNESCO made me reflect on my progression through this Software Process Management course. It was one of the most exhilarating experiences I have ever had in an educational environment. Throughout the various activities I’ve taken part in, I not only gained new skills but also learned my strengths and weaknesses. I see process management as a foundational skill, and this resource has showcased the power of processes throughout entire systems, not just for software teams.

Source: https://www.unesco.org/en/articles/education-management-information-systems-progress-assessment-tool-methodological-guide-educational

From the blog CS@Worcester – theJCBlog by Jancarlos Ferreira and used with permission of the author. All other rights reserved by the author.

REST API Best Practices

In this blog post, I will go over some of the best practices of REST API design, especially performance and security, which are musts for most API consumers. It is worth noting that proper API design eases maintenance for the many services and applications running in a web browser; in the worst case, a poorly designed API is more difficult to maintain and behaves differently from what everyone expects.

In summary, the post introduces the crucial ways to design REST APIs properly; there are accepted conventions to follow so we won’t run into problems down the road. One example practice is to accept and respond with JSON, which is the standard for transferring data; JavaScript has built-in methods to encode and decode JSON, whether through the Fetch API or another HTTP client. Another practice is using nouns instead of verbs in endpoint path names, since a verb could conflict with the HTTP methods, which are already the verbs. There are many other practices as well, such as handling error codes, maintaining good security practices, caching data to improve performance, and versioning the APIs.
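Two of those conventions can be sketched together, noun-based paths and JSON responses (the endpoint, handler, and data here are hypothetical, not from the article):

```typescript
// A handler for GET /users (a noun path, not /getUsers: the HTTP
// method already supplies the verb). It responds with JSON and an
// explicit status code.
type ApiResponse = {
  status: number;
  headers: Record<string, string>;
  body: string;
};

function listUsers(): ApiResponse {
  const users = [{ id: 1, name: "Ada" }];
  return {
    status: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(users), // accept and respond with JSON
  };
}
```

A browser client could then decode the body with `JSON.parse` (or let the Fetch API's `response.json()` do it), which is exactly the built-in encode/decode support the article points to.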

The reason I chose this article is that there are many ways to experiment with REST APIs when designing a web application that meets consumers’ needs, especially around the end-user experience. In most cases, we want to deliver an up-to-date user experience within our web applications and make sure to handle the unexpected. In other words, we need to keep track of the features in our web applications, monitor bugs that affect many users, and version appropriately according to semantic versioning. In my experience, programming and maintaining a REST API project takes very careful steps to account for the user experience, as well as for what can and cannot be accessed. The best example is good security practices, such as obtaining SSL/TLS security. A good boundary is that users should not be able to access any information beyond what is intended for them; otherwise, they could get into another user’s information, or even information belonging to the admins of the web server.

From what I learned from the article, applying these practices will help me build a clean REST API project and maintain it in the long run, so users will always have access to what they need. As a project grows, it will often demand more from the backend, so careful handling is needed to meet all expectations.

Source: https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

From the blog CS@Worcester – Hello from Kiet by Kiet Vuong and used with permission of the author. All other rights reserved by the author.

Blog Post for Quarter 4

Considering that a brief simulation of workflows was a recent assignment, I thought it would be interesting to re-explore the waterfall method. I found this article interesting because of its ending: instead of outright rejecting waterfall, it emphasized the potential benefits of certain aspects of the method.

I think this is particularly interesting because it highlights the benefits and suggests pulling certain aspects out into hybrid styles. This also relates back to class by directly referring to agile methodologies: when discussing hybrid styles, the article mentions combining waterfall and agile together. I also found it interesting that there are likely many workplaces that use different workflow practices, which the article touches on when it refers to companies choosing certain styles of workflow.

For example, it presented waterfall’s extensive documentation and planning as a positive in some situations: it could be used to keep the agile method within a rough plan and to sequence things more effectively, which was described as helping maintain a long-term goal more concretely. By showing how extensive the early planning is, the article explains how waterfall may be used in tandem with another style. This can lead to moments where the overall goal is still clear even after many short-term goals have been met. In addition, there can be some flexibility, using agile’s dynamic and constant feedback to make adjustments, which could mitigate the huge commitment of waterfall.

I also found this interesting because I remembered the concepts of technical debt and clean code, where clean code aims to reduce technical debt by taking more time to make code cleaner and easier to modify. Both waterfall and agile could technically make use of it: either by planning certain things to build from the beginning, or by opting for the modularity of relying on numerous small functions. Alternatively, documentation could exist to inform how code should be written, so long-term projects are made more consistent. After all, changing too many things may lead to a lot of backtracking, so there has to be a purpose to cleaning up code in a certain way.

This is something that I should keep in mind for the future as I may need to adapt to different workflow styles depending on the company. This is also for personal use to see what flows I prefer in my work. The highlight of pros and cons also tells me I should look out for advantages and disadvantages for other styles as well. It makes things more situational and informs me to be more flexible and open-minded.

https://ones.com/blog/waterfall-lifecycle-computer-science-pros-cons/

From the blog CS@Worcester – Ryan's Blog Maybe. by Ryan N and used with permission of the author. All other rights reserved by the author.

Vue.js and How it Works

 

This article, An Introduction to Vue.js: Understanding the Framework and Getting Started, is a good starting point for people who want to use Vue.js, a JavaScript framework used to build user interfaces. It highlights the major features of Vue.js (components, directives, instances, and the router) and how they work. It then goes into deeper topics: the Vue.js lifecycle, from beforeCreate to destroyed, and an introduction to Vuex, the state management library. It explains the structure of Vuex (state, mutations, actions, context) and demonstrates how Vuex can be integrated with Local Storage and Session Storage to persist application state across page reloads or browser sessions. It continues on to reactivity and event handlers, showing how computed values react automatically to data changes and how event listeners allow components to respond to user interaction. Finally, it explains server-side rendering, what it does, and how it is used to improve performance and SEO. The article concludes by stating that Vue.js is a powerful and popular framework for web development, offering a versatile and intuitive approach to building interactive and dynamic user interfaces; it simplifies the whole process of building and encourages continued learning through documentation and community resources.
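To make the component idea concrete, here is a conceptual sketch only (it does not use the real Vue runtime, and the counter component is invented) of how a Vue-style options object separates data, computed values, and methods:

```typescript
// A Vue-like component description: data() returns fresh state,
// computed values derive from it, and methods update it.
interface CounterState {
  count: number;
}

const counterComponent = {
  data: (): CounterState => ({ count: 0 }),
  computed: {
    // In real Vue, this would re-run automatically whenever count changes.
    doubled: (state: CounterState): number => state.count * 2,
  },
  methods: {
    increment: (state: CounterState): void => {
      state.count += 1;
    },
  },
};

const state = counterComponent.data();
counterComponent.methods.increment(state);
// counterComponent.computed.doubled(state) → 2
```

Real Vue wires these pieces together reactively so you never call the computed function by hand, but the separation of concerns is the same.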

I chose this article because I usually did my front-end work with HTML, CSS, and JavaScript while learning how to use React (TypeScript). Vue.js seems to remove a lot of the work of adding things, especially since it helps with performance and SEO optimization, which must be taken into consideration when making a website. Also, Vue.js is more lightweight than other frameworks like Angular and React, and I have heard a lot about its ease of use, so I thought I needed to give it a try, and this article was the best introduction to how Vue.js works, ending with an interesting state management library coded differently from other approaches. With the knowledge I gained, I expect to organize my projects around reusable components, use lifecycle hooks more intentionally for initialization and cleanup, and rely on Vuex when managing complex state across multiple parts of an application. Beyond that, I plan to build a website using just the things I learned about Vue.js in this article and compare it, over the long run, against a plain HTML/CSS/JavaScript version that brings in a library, and against a React/TypeScript version (a basic one that I have built as a test), based on performance, lightness, SEO, and so on.

 

https://medium.com/@phamtuanchip/an-introduction-to-vue-js-understanding-the-framework-and-getting-started-d0ad0f3a6c01

 

From the blog Sung Jin's CS Devlopemnt Blog by Unknown and used with permission of the author. All other rights reserved by the author.