A Look at REST API Design

Hello, for this quarter’s blog I read “The Day My Role-Based REST API Design Collapsed (and What I Learned)” by Sidharth Devaraj. As the title suggests, the blog talks about REST APIs, specifically Sidharth’s experience with them when he was new to backend development and his reflections on a key mistake he made when designing his first REST API. To quickly summarize what a REST API is, per GeeksForGeeks it “…is a type of API (Application Programming Interface) that allows communication between different systems over the internet. REST APIs work by sending requests and receiving responses, typically in JSON format, between the client and server”. Following the theme of my previous blog, this is a topic pertinent to what we have recently been doing in class, and something I would like to see a human perspective on to gain insight about how I may utilize it in the future.

Sidharth starts by explaining that he needed to make an API for an application he was working on. He mentions that when he was initially planning things out, he decided he wanted to focus on keeping things organized and easy to understand. To accomplish this, his API was categorized into something that he, at the time, thought was pretty smart: user roles. These different roles would then act as routes for items to be used by those roles, i.e. /admin/… would host administrative objects, and /customer/… would host items used by a customer. At first glance this seems like it could be a pretty good idea, and Sidharth was pretty happy with what he thought up. However, he didn’t account for a crucial situation: what if admins and customers both needed to interact with the same resources? The design simply couldn’t handle it, and the more resources that were created, the more issues arose. He ends his blog by pointing to the importance of a key design principle of REST: focus on resources, not roles.
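
To make the contrast concrete, here is a rough TypeScript/Express sketch of my own (not code from Sidharth’s post; the orders resource and its fields are made up) showing the role-based layout next to a resource-based one:

```typescript
import express from "express";

const app = express();

// Role-based design: the same "orders" data sits behind two different routes,
// so every change has to be made twice and shared access gets awkward.
app.get("/admin/orders", (req, res) => {
  res.json({ view: "admin", orders: [] });
});
app.get("/customer/orders", (req, res) => {
  res.json({ view: "customer", orders: [] });
});

// Resource-based design: one route per resource. What a caller is allowed
// to see or do is decided by authorization logic, not by the URL.
app.get("/orders", (req, res) => {
  // e.g. filter fields or results based on the authenticated user's role
  res.json({ orders: [] });
});

app.listen(3000);
```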

At first I thought this was a pretty bad mistake. If he had only thought a bit about how a resource might need to be used by two different groups, he wouldn’t have made it in the first place. But then I thought about something: what if, when initially building the application, none of his resources were needed by both groups? If he was testing things in that situation, he would never run into any issues and wouldn’t be made aware of the flaws of his design. Only once the API started scaling did the problem become apparent. I initially thought the blog was a pretty good read, but nothing special. It was more or less just “follow the design principles” without much more substance than that. However, reading into it a bit more, it also touches on the importance of future-proofing your designs. Initially, Sidharth’s API worked great. Had his requirements not changed, he would have accomplished his goal of keeping it organized and easy to understand. However, as those requirements changed, new issues that weren’t present before arose. Scaling my work isn’t something I really think about much; when working on projects I really only think about how they can accomplish the present goal. But in a professional environment, requirements will never be static, and this is something I really should get into my head. This blog definitely reminded me of that.

From the blog Joshua's Blog by Joshua D. and used with permission of the author. All other rights reserved by the author.

Managing a Software Development Team

Guide to Successful Software Development Team Management by Aimprosoft

For this blog, I chose to write about team management since it’s been a huge focus in both my CS-343 and CS-348 classes this semester. Since the beginning, we’ve been working in groups on in-class activities, so reading about this topic was something I could relate to.

The blog is an interesting and helpful guide on how to lead software teams effectively. It begins by explaining the importance of having good management: it helps complete projects on time, keeps projects within budget, and overall makes the team more productive and happier. What stood out to me was the idea that successful team management is not about one person controlling everything. Instead, it involves everyone on the team, careful planning, and good tools rather than just technical skills.

Many things that I learned from this blog connected to topics we’ve discussed in class such as Agile, Scrum, and Kanban. Seeing these methodologies explained in a real-world guide made it a lot easier to compare the differences. It explains that Kanban doesn’t use time boxes like the sprint cycles that Scrum uses. Instead it focuses on visualizing everything, limiting the number of things that are in progress, and keeping work flowing steadily. It notes that Kanban is great for teams that handle a constant stream of incoming requests, while Scrum is better when you have clear goals and a stable team. I also learned that some teams even mix both approaches.

What I took from this blog was how important it is to have open and constant communication. I used to think that having a successful team would require expert coders to lead, but that isn’t the case. Both our class and this blog have shown me that communication, strong organization, and planning are far more important. It motivated me to start improving these skills myself, since they’re just as important as being a great programmer in this career. In group projects, I plan to contribute more by making sure that everyone is on task and that everyone gives their opinions. Improving these skills right now will definitely help me advance through my career.

From the blog CS@Worcester – wdo by wdo and used with permission of the author. All other rights reserved by the author.

We Should Refactor Our Code

Code Refactoring and Why You Should Refactor Your Code by Lazar Nikolov

For this blog post, I chose to write about Code Refactoring. Nikolov starts off by expressing the idea that software does not expire, but it rots over time. The program continues to work, but as we add quick fixes, messy code, and shortcuts, it begins to become harder to work with. What is refactoring? In simple terms, refactoring is the process of cleaning and improving code without changing what the program actually does. To do this we must find code smells and technical debt and then begin removing them.

Code smells are not bugs that cause problems in your program. Instead, they are warning signs that something is wrong with the design even though the code runs. Some examples are very large functions that try to do more than they should, confusing names, and complex logic that is hard to follow. Code written this way is often closely tied together, so changing one small thing can break another part.
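
As a tiny, made-up TypeScript sketch (my own, not from Nikolov’s article) of that first smell, here is one function trying to validate, save, and notify all at once, next to the same work split into functions that each do one thing:

```typescript
type Order = { email: string; items: string[] };

// Smell: one large function doing validation, persistence, and notification.
function processOrderSmelly(order: Order): void {
  if (!order.email.includes("@") || order.items.length === 0) {
    throw new Error("invalid order");
  }
  console.log("saving", order);         // stand-in for a database write
  console.log("emailing", order.email); // stand-in for a confirmation email
}

// Refactored: each function has one job and a name that says what it does,
// so changing how emails are sent cannot accidentally break saving.
function validateOrder(order: Order): void {
  if (!order.email.includes("@") || order.items.length === 0) {
    throw new Error("invalid order");
  }
}

function saveOrder(order: Order): void {
  console.log("saving", order);
}

function sendConfirmation(email: string): void {
  console.log("emailing", email);
}

function processOrder(order: Order): void {
  validateOrder(order);
  saveOrder(order);
  sendConfirmation(order.email);
}
```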

Technical debt is what happens when we choose the fast and easy solution, like a shortcut, to fix a problem. This happens when we’re rushing to meet a deadline or don’t follow coding standards. As a result, we pay the price later with extra work, since the code is hard to fix and improve on.

When should you refactor your code? Some signs are repeated code, many bugs, or being unable to add a new feature because the existing code is hard to work with. This relates to two terms we’ve learned in class: DRY (Don’t Repeat Yourself) and AHA (Avoid Hasty Abstractions). DRY says we should not repeat the same logic many times in our code. AHA reminds us we shouldn’t create abstractions too early.
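
As a small, hypothetical TypeScript illustration of DRY (my own sketch, not from the article), repeated logic can be pulled into one well-named helper so a fix only has to happen in one place:

```typescript
// Before: the same normalization and check are copy-pasted in two places.
function registerUser(rawEmail: string): void {
  const email = rawEmail.trim().toLowerCase();
  if (!email.includes("@")) throw new Error("invalid email");
  console.log("registering", email); // stand-in for saving the user
}

function inviteUser(rawEmail: string): void {
  const email = rawEmail.trim().toLowerCase();
  if (!email.includes("@")) throw new Error("invalid email");
  console.log("inviting", email); // stand-in for sending the invite
}

// After: the shared logic lives in one helper, so fixing the email rule
// later means changing exactly one function.
function normalizeEmail(rawEmail: string): string {
  const email = rawEmail.trim().toLowerCase();
  if (!email.includes("@")) throw new Error("invalid email");
  return email;
}

function registerUserDry(rawEmail: string): void {
  console.log("registering", normalizeEmail(rawEmail));
}

function inviteUserDry(rawEmail: string): void {
  console.log("inviting", normalizeEmail(rawEmail));
}
```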

This blog made me realize how much I relate to this topic. As someone who often procrastinates and usually does assignments at the last minute, I’ll write quick code that “just works” so I can finish my assignments. I rarely think about how I can improve or add to it later. But I now understand that these shortcuts will definitely turn into technical debt. Moving forward, I’m going to watch for code smells more often as I work on projects. My goal is to write more optimized, cleaner code that will be better in the long run.

From the blog CS@Worcester – wdo by wdo and used with permission of the author. All other rights reserved by the author.

REST API Best Practices

In class, we have continued using REST APIs, and we will only continue to use them. Through homework assignments and class activities, I’ve gotten more comfortable with REST APIs, but I will need a good understanding of them since they are a vital part of my capstone project next semester. I found a blog post on the Stack Overflow blog that listed some best practices for REST API design that I was previously unaware of or did not have full knowledge of.

Nest Endpoints That Contain Associated Info

If your database has objects that contain other objects, it is a good idea to reflect that in the endpoints. An example from guestInfoBackend would be “/guests/:uuid/assistance/”. This URI is used to access the “assistance” object inside a specific guest. But note that having multi-level nests can get out of hand. A bad example would be having an endpoint that looks like: /articles/:articleId/comments/:commentId/author. It is better to use the URI for the specific user within the JSON response as follows: “author”: “/users/:userId”.
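
Here is a rough Express-style sketch of what that might look like (my own example; the response fields and the /users/42 value are made up, not actual guestInfoBackend code):

```typescript
import express from "express";

const app = express();

// One level of nesting: the assistance record "belongs to" a specific guest.
app.get("/guests/:uuid/assistance", (req, res) => {
  res.json({
    guestUuid: req.params.uuid,
    assistance: { needed: true, types: ["meals"] },
  });
});

// Instead of nesting /articles/:articleId/comments/:commentId/author,
// return the author as a URI inside the comment's JSON response.
app.get("/articles/:articleId/comments/:commentId", (req, res) => {
  res.json({
    id: req.params.commentId,
    body: "Nice article!",
    author: "/users/42",
  });
});

app.listen(3000);
```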

Return HTTP Response Codes Indicating What Kind Of Error Occurred

HTTP response codes help to eliminate confusion when an error occurs. Response codes give the API maintainers enough information to understand the problem that has occurred. The blog also showed some other common codes that were not discussed in class:

  • 401 Unauthorized – the user isn’t authorized to access a resource; usually returned when the user isn’t authenticated.
  • 403 Forbidden – user is authenticated, but not allowed to access a resource.
  • 502 Bad Gateway – invalid response from an upstream server.
  • 503 Service Unavailable – something unexpected happened on the server side; this could be anything from server overload to some parts of the system failing.

Messages also need to be attached to response codes so that maintainers have enough information to troubleshoot the issue, but not so much detail that attackers can use the error content to launch attacks.
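
A minimal sketch of what that could look like in Express (my own example; findGuest is a hypothetical lookup function), returning a useful status code plus a message that helps maintainers without leaking internals:

```typescript
import express from "express";

const app = express();

// Hypothetical lookup; returns undefined when no guest matches the uuid.
async function findGuest(uuid: string): Promise<object | undefined> {
  return undefined;
}

app.get("/guests/:uuid", async (req, res) => {
  try {
    const guest = await findGuest(req.params.uuid);
    if (!guest) {
      // 404 with a short, non-sensitive message the client can act on
      return res.status(404).json({ error: "Guest not found" });
    }
    return res.status(200).json(guest);
  } catch (err) {
    // Log full details server-side for the maintainers...
    console.error(err);
    // ...but send the client only a generic message (no stack traces, no SQL)
    return res.status(500).json({ error: "Internal server error" });
  }
});

app.listen(3000);
```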

Maintain Good Security Practices

The communication between client and server should be private. A good way to secure your REST API is by loading an SSL/TLS certificate onto the server. They are very low cost or even free to use, so it is a no-brainer to strengthen security. It is also a good idea to apply the principle of least privilege. Each user should have role checks or more granular permissions. The admin should not have an issue adding and/or removing permissions and roles from users.
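
One common way the least-privilege idea shows up in Express-style code is a small authorization middleware. This is only a sketch of my own, and it assumes an earlier authentication step has already attached the user’s roles to the request:

```typescript
import express from "express";

const app = express();

// Assumes prior authentication middleware set req.user = { roles: [...] }.
function requireRole(role: string) {
  return (req: any, res: express.Response, next: express.NextFunction) => {
    const roles: string[] = req.user?.roles ?? [];
    if (!roles.includes(role)) {
      // 403 Forbidden: authenticated, but not allowed to touch this resource
      return res.status(403).json({ error: "Forbidden" });
    }
    next();
  };
}

// Only users holding the "admin" role may delete guests.
app.delete("/guests/:uuid", requireRole("admin"), (req, res) => {
  res.status(204).send();
});

app.listen(3000);
```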

Version APIs

In order to prevent clients from being broken while making changes to the API, different versions of the API should be available. This way, old endpoints can be phased out instead of forcing everyone to switch over to the new version at the same time. This is important for public APIs and is how most apps today handle making changes.
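
A quick sketch of one common way to do this with Express routers (my own example, not from the blog): old clients keep calling /v1 while new clients opt into /v2, which changes the response shape:

```typescript
import express from "express";

const app = express();
const v1 = express.Router();
const v2 = express.Router();

// Old clients keep getting the original response shape from /v1/guests.
v1.get("/guests", (req, res) => {
  res.json([{ name: "Ada Lovelace" }]);
});

// New clients call /v2/guests, where the name is split into two fields.
v2.get("/guests", (req, res) => {
  res.json([{ firstName: "Ada", lastName: "Lovelace" }]);
});

app.use("/v1", v1);
app.use("/v2", v2);

app.listen(3000);
```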

Source

https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

From the blog CS@Worcester – Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Refactoring Code

When looking for ideas about what to write for this blog post, I settled on refactoring code. For starters, I know what it is and why we do it, but I wanted to go more in depth by looking at other resources and what they have to say about it. One of the first sources I came across was a blog post called “How to create a culture of continuously refactoring code?” by Stas Wishnevetsky. This post from Medium revealed something that for some reason wasn’t so obvious to me: refactoring isn’t a one-and-done kind of thing. Instead, it is more of a routine and should be treated as one to keep code properly maintained and easy to work with.

The first thing that really struck me when reading this article was how it talks about code as if it were something physical and biological. This comparison was made to show how code can “decay” and “rot”. While the code itself doesn’t break down and fall apart with time, there are multiple reasons that make it seem like it does. These include coding abilities improving over time, business needs and scale fluctuating, and deliberate tech debt. The article then goes on to explain why we should refactor, with the end goal of maintaining stability and creating improvement.

I guess I have never had the need to deal with refactoring my code, since most of the time I program one-time assignments and have never had to keep a program up over time. I have, however, experienced my abilities improving while creating a project and have had some early parts be sloppy while later improvements were done better. I suppose I should have gone back and refactored those early parts, but the bottom line is that I’ve never had the need to. I would also run into problems trying to make small changes, which would be risky in my program. This is one of the many pains explained in the article I read, and it makes a lot of sense.

I know that going forward in my career, refactoring will become very prominent in my work, and understanding it more now makes me feel better about it. This also really shows the amount of effort that is put into the big programs we use on a day-to-day basis. Starting the practice of constantly and consistently refactoring my code, even on simple projects, will be super beneficial for me going forward.

From the blog CS@Worcester – Works for Me by Seth Boudreau and used with permission of the author. All other rights reserved by the author.

The Importance of Keeping Code Clean

After reviewing its meaning and applying the practice in my class, I wanted to search for a good source on clean code principles and some more reasons why it is important. I came across and read an article from the Codacy blog called “What is Clean Code? A Guide to Principles and Best Practices.” This article really went into depth on what it means for your code to be clean and what difference that makes, not only to you, but to other people as well. The link to the original post is right here: https://blog.codacy.com/what-is-clean-code

One of the first things this blog states is “writing code is like giving a speech. If you use too many big words, you confuse your audience. Define every word, and you end up putting your audience to sleep.” This opening statement is what drew me to the post. Thinking about it like this makes so much sense. I believe I am pretty decent at public speaking myself, so relating it to code really helps define clean code for me. The post contains nine different methods for doing this, such as following code-writing standards and refactoring continuously. The article also explains how clean code helps development teams: teamwork is way easier, debugging becomes more straightforward, and overall it helps prevent many mistakes.

In class we’ve gone over many concepts this post explains, such as DRY (don’t repeat yourself), using comments sparingly, and writing short, straightforward functions. The most backwards concept for me is the comments. In past classes many professors have told me to write lots of comments explaining what certain things do. Now I’m being told to try not to use any? It does make sense, though: focus on making your code readable so you don’t need the comments.

Going forward, making code simple and understandable will be a main focus for me. Keeping things simple and understandable makes way more sense than writing code I need comments to explain. Another big thing for me is eliminating repetition. Using functions and calls instead of writing the same code over and over again will one hundred percent make my computer science life way easier and save me tons of time as well. I’ve only just started writing code in teams, so I’ve only ever written code for my professors or just for myself. I know that keeping things readable will prevent extra meetings with teammates and hours spent explaining what the code I’ve created does.
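
As a tiny, made-up TypeScript example of what I mean (the tax rate and numbers are just placeholders), a short, well-named function can replace both the repeated math and the comment that would otherwise be needed to explain it:

```typescript
// Before: the intent needs a comment, and the same expression gets
// copy-pasted everywhere a total is needed.
// total = sum of item prices plus 6.25% sales tax
const total1 = (10.0 + 4.5) * 1.0625;

// After: the name says what the code does, so no comment is needed,
// and every caller reuses the function instead of repeating the math.
const SALES_TAX_RATE = 0.0625;

function totalWithTax(prices: number[]): number {
  const subtotal = prices.reduce((sum, price) => sum + price, 0);
  return subtotal * (1 + SALES_TAX_RATE);
}

const total2 = totalWithTax([10.0, 4.5]);
console.log(total1 === total2); // true: same result, clearer code
```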

From the blog CS@Worcester – Works for Me by Seth Boudreau and used with permission of the author. All other rights reserved by the author.

Professional Insights on Software Maintenance

Hello everyone, and welcome to my latest and most likely my last blog entry of the semester!

For this week’s self-directed professional development, I read “The Ultimate Software Maintenance Guide” from Mayura Consultancy Services. This article offers a thorough, business‑oriented perspective on software maintenance, covering everything from types of maintenance to common challenges, processes, and best practices. It helped me prepare mentally and technically for the maintenance project I’m about to begin at The Hanover Insurance.

Summary of the Article

The Mayura Consultancy guide begins by defining software maintenance as a continuous process of modifying and updating software to correct defects, improve performance, and adapt to changing business needs.

It breaks down four main types of maintenance:

  1. Preventive maintenance – proactively identifying and resolving potential issues before they escalate.
  2. Corrective maintenance – fixing bugs and defects reported in production.
  3. Adaptive maintenance – adjusting the software to work with new operating systems, regulations, or business requirements.
  4. Perfective maintenance – enhancing performance or usability, or adding new features.

The article also outlines the key challenges maintenance teams face: dealing with legacy code, managing complexity, controlling costs, and allocating time. It spells out a maintenance process that includes issue identification, evaluation, planning, implementation, testing, and deployment.

Mayura then describes four maintenance models (the Quick-Fix Model, Iterative Enhancement, Reuse-Oriented, and Boehm’s Model), each suited to different business contexts. The article finally offers best practices, which include maintaining documentation, collecting and analyzing data, performing regular testing, monitoring performance, and sharing knowledge within the team. It closes by discussing the benefits of maintenance: extended software lifespan, better security, cost savings, and increased user satisfaction.

Why I Chose This Resource

I selected this article because I’m about to begin my first maintenance project at The Hanover Insurance Group. Knowing effective maintenance practices will be crucial as I work on updating and sustaining existing software. This piece from Mayura Consultancy provides both a high-level strategic vision and practical, actionable steps — exactly what I need for a professional maintenance context.

Also, in my software engineering classes, we emphasize design, clean code, and testing. This article helped me see how those principles carry over into maintenance: not just writing code once, but caring for it over time.

Personal Reflections: What I Learned and How It Connects to Class

This article emphasized to me that software maintenance is not just reactive — it’s a proactive, vital phase in the software lifecycle. From class, I understood design patterns and modular structures, but Mayura’s guide made me realize that those good design decisions pay dividends later, when maintenance is needed.

The breakdown of different maintenance types (preventive, adaptive, etc.) was especially eye-opening. It taught me to think about maintenance not only as bug-fixing but also as a strategic activity that aligns with business change.

I was particularly struck by how essential good documentation is. In class, we talk about commenting code and writing clear functions — but in maintenance, that clarity supports future developers and reduces downtime.

Application to My Future Practice

In my maintenance work, I plan to:

  • Use Mayura’s structured maintenance process (identification → planning → implementation → testing → deployment) whenever I handle a bug or feature update.
  • Prioritize preventive maintenance so I’m not always on the back foot.
  • Keep documentation up to date — not just in code, but in architecture, logs, and change-tracking.
  • Advocate for regular testing cycles (unit, regression) and performance monitoring.
  • Share my learnings with teammates, so we maintain a shared, growing knowledge base.

Citation / Link
Rao, Ashwin Kumar. “The Ultimate Software Maintenance Guide: Tips, Tricks, and Best Practices.” Mayura Consultancy Services, updated November 1, 2025. Available online: https://www.mayuraconsultancy.com/blog/ultimate-guide-to-software-maintenance

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Blog Post 3

https://www.freecodecamp.org/news/permissive-and-protective-software-licenses/

For this blog entry I’ve decided to dig into the world of software licenses. In class we went over quite a few different licenses like GPL, MIT, and Apache, but just looking through the https://www.tldrlegal.com/browse site, it’s clear we only scratched the surface. That’s not even counting the fact that, from what I understand, anyone can make up their own license with a word doc and a lawyer, so the list of different licenses just grows. Yet despite that, all licenses have a common goal, which is to explain who can use somebody’s work and what they can do with it. This is a very important aspect of software development, and one I never really considered as something I would have to think too deeply about, mainly because legal matters aren’t really my expertise. Because of that I wanted to find a blog post that could break things down in a way even someone like me could understand.

In my search I found How Do Open Source Licenses Work? Permissive and Protective Software Licenses Explained, written by David Clinton. In the post, David breaks licenses into two categories: permissive and protective.

On the permissive side we have licenses like MIT and Apache. These licenses basically let people do almost whatever they want. The article puts it pretty clearly: permissive licenses “give you the right to use the software for any purpose – including commercial purposes – and the right to modify the software to suit your needs.” For someone like me who prefers simple, straightforward rules, this seemed like the category I’d be most compatible with.

Then we have protective licenses, which David also refers to as restrictive licenses. These are the copyleft licenses like GPL that we went over in class. Similar to permissive licenses, they give users the right to use, modify, and distribute the software, but with the extra condition that the software must remain free and open source now and forever. Considering this, I can understand why David would call these licenses restrictive, but at the same time a part of me appreciates someone who would choose this type of license.

After reading this blog post I definitely got a better understanding of the different types of licenses, and David did a good job breaking them down to the essentials. The thing, though, is that while it answered the question of how licenses work, I am now posed with the question of which type I’d choose. On the one hand, I like the flexibility of permissive licenses and the fact that you can either share or sell your work. On the other hand, protective licenses seem to prioritize “the little guy” and keep improvements in the open, which I like, but I also know that if I put in the work of making improvements I would like the option to get paid, though I guess that says more about me. So much to think about.

From the blog CS@Worcester – CS Notes Blog by bluu1 and used with permission of the author. All other rights reserved by the author.

Blog Post #4

Title: Building Secure Web Applications

Blog Entry:

This week, I explored the issue of web application security, an increasingly serious field in software development. With the growing interconnectedness of applications and the increasingly data-driven nature of application development, protecting user information and system integrity is just as important as functionality or performance. The subject connects to the course goals around system design, software quality, and secure coding practices.

During my research, I paid attention to the common weaknesses that programmers have to deal with, including cross-site scripting (XSS), SQL injection, and insecure authentication systems. Such weaknesses are usually brought about by a failure to consider security requirements in the initial design phase. As an illustration, failing to validate input correctly may enable attackers to inject malicious code or access classified information. Security by design is based on the idea that protection must be built in at each stage of development instead of treating security as an afterthought.
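
To make the input-handling point a bit more concrete, here is a small TypeScript sketch of my own (not taken from any particular source): validate that input matches the shape you expect, and escape HTML-special characters before user text is placed into a page, which is one common defense against reflected XSS:

```typescript
// Validate: accept only input that matches the expected shape.
function isValidUsername(input: string): boolean {
  // letters, digits, and underscores, 3 to 20 characters long
  return /^[A-Za-z0-9_]{3,20}$/.test(input);
}

// Escape: neutralize characters that are special in HTML so user-supplied
// text is rendered as text instead of being interpreted as markup or script.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const fromUser = '<script>alert("xss")</script>';
console.log(isValidUsername(fromUser)); // false: rejected by validation
console.log(escapeHtml(fromUser));      // safe to embed in an HTML page
```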

I also reviewed industry best practices for enhancing application security. Common attacks are prevented with techniques like parameterized queries, enforcing HTTPS, encrypting sensitive data, and using secure authentication frameworks. Periodic code review, automated testing, and compliance with standards such as the OWASP Top Ten guide hold developers accountable for building more robust systems. I also learned that a healthy security culture in a development team, in which the whole team takes responsibility for securing its users’ data, is as valuable as any technical measure.
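
To show what a parameterized query looks like in practice, here is a short sketch using the Node.js pg client in TypeScript (my own illustrative example, with a made-up users table):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// Unsafe: concatenating input into the SQL string lets crafted input
// change the query itself (SQL injection).
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT id, name FROM users WHERE email = '${email}'`);
}

// Safe: the query text is fixed and the value is passed separately,
// so the driver treats it strictly as data, never as SQL.
async function findUserSafe(email: string) {
  return pool.query("SELECT id, name FROM users WHERE email = $1", [email]);
}
```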

This subject matter echoed our classroom discussions on software reliability and maintainability. Secure code is like clean code in that it is meant to be used over a long period. I was intrigued that the same design principles, such as clarity, simplicity, and modularity, also make code more secure. A well-organized system that is simple to audit is less likely to conceal undetected weaknesses.

Reflection:

This study has made me understand that the need to develop secure applications is not just a technical one, but also a moral obligation. Developers should consider risks and the safety of users in advance. Security should not come at the expense of usability; rather, it should complement usability to produce software the user can trust. This attitude has motivated me to follow secure coding practices early in my work, including validating inputs, handling data carefully, and using sound frameworks.

In general, this exploration broadened my perspective on contemporary software design beyond performance and functionality. Security is now a key component of quality software engineering. With these principles combined, I am more confident that I will be able to create applications that are efficient and scalable, as well as safe for users in an ever more digital world.

Next Steps:

Next, I will try out some security-oriented tools such as penetration testing frameworks and automated vulnerability scanners. I will also keep reading the OWASP guidelines to deepen my knowledge of emerging threats and mitigation controls.

From the blog CS@Worcester – Site Title by Yousef Hassan and used with permission of the author. All other rights reserved by the author.
