REST API Specifications

11/23/2024

In class this week we continued learning about REST APIs, but this time we dove deeper into the source code. We went over the JavaScript files in the source code, and we also looked at the .yaml files.

We took a deeper look at the path endpoints in each endpoint file and at what each method asks for. For example, getGuest() makes a request and receives a response, and it uses the request “body” to do so. getOne(), on the other hand, uses “param”: when an endpoint needs a specific value to search for, alter, or delete a record, it takes the wsuID as a parameter. getGuest() instead retrieves all the guests, which are stored in an array, so using “param” in the request would not work; we are not searching for one specific wsuID but want all of the wsuIDs returned, which is why “body” is used when sending the request.
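
To make the distinction concrete, here is a minimal sketch of what two such handlers might look like in an Express-style JavaScript file. The function bodies and the guests model are my own illustration, not the actual course code.

    // Hypothetical Express-style handlers; the guests model is illustrative.
    // getOne: the wsuID arrives as a path parameter, e.g. GET /guest/{wsuID}
    async function getOne(req, res) {
      const { wsuID } = req.params; // value taken from the URL path
      const guest = await guests.find(wsuID);
      res.status(200).json(guest);
    }

    // getGuest: no single wsuID is needed, so the request relies on the body
    // (or no input at all) and every stored guest is returned as an array.
    async function getGuest(req, res) {
      const allGuests = await guests.findAll();
      res.status(200).json(allGuests);
    }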

We also learned that the mount-endpoint file did not need to be altered, because it handles every endpoint the same way. New or changed endpoints only need to be added under the endpoints path rather than the lib path. Since each endpoint is handled identically but does something different, simply adding a file to the endpoints directory is all that is required; the mount-endpoint file already reads that directory and wires everything up.
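
Below is a minimal sketch of how such a mounting file might work. The directory name and the register() call are assumptions for illustration; the actual project code may differ.

    // Hypothetical sketch of an endpoint-mounting module. It treats every
    // file in the endpoints directory the same way, which is why new
    // endpoints only need to be dropped into that directory.
    const fs = require('fs');
    const path = require('path');

    function mountEndpoints(app, dir = path.join(__dirname, 'endpoints')) {
      for (const file of fs.readdirSync(dir)) {
        const endpoint = require(path.join(dir, file));
        endpoint.register(app); // each endpoint file registers its own route
      }
    }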

Reading the article below also showed me how tags must be limited to those allowed by the JSON schema ruleset. It also shows how paths work, such as /guest/{wsuID}, which we went over in class. If an incorrect value is entered, the server returns an error code, either 400 or 404 depending on the user’s input. For every method, whether it is a GET or a CREATE endpoint, the handler awaits a check against the database to see whether the wsuID already exists; if it does not, the record is created. Every endpoint also needs a method, a path, and then the body of the object, and the logic needs to be written in a try/catch statement. This is very important for feedback and testing: if the server is down the user will get a 500 code, and if everything is working and the user entered the correct information the code will be 200 or 201 depending on the method called. Each response should also return a message stating what the error is so it can be identified. This is needed for every endpoint regardless of the case.
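
As a rough illustration of that pattern, here is a sketch of a create-style handler. The model and field names are made up, but the status codes match the ones described above.

    // Illustrative create-guest handler; guests and its methods are hypothetical.
    async function createGuest(req, res) {
      try {
        const { wsuID } = req.body;
        const existing = await guests.find(wsuID); // await the database check
        if (existing) {
          return res.status(400).json({ error: `wsuID ${wsuID} already exists` });
        }
        const created = await guests.create(req.body);
        return res.status(201).json(created); // 201: successful creation
      } catch (err) {
        // If the server or database is down, report a 500 with a message
        return res.status(500).json({ error: err.message });
      }
    }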

Source: OpenAPI Specification – Version 3.1.0 | Swagger

From the blog CS@Worcester – Cinnamon Codes by CinnamonCodes and used with permission of the author. All other rights reserved by the author.

REST API

Growing up, sometimes when I would Google search things, the page would not load and would instead give me a code, typically a 404. I never understood what it was or what it meant until recently. The 404 is a REST API response code, a status code that the server returns when a web page or client requests something. There are a bunch of codes, covering everything from successful requests to malformed URLs to unstable connections to the servers. But there is more to REST APIs than just response codes.

In this blog post, the Postman Team talks about everything related to REST APIs, including their history, how they work, their benefits, some challenges, and some examples. A REST API exposes resources, which can be a number of things, such as a document, an image, or a collection of them. REST uses an identifier to determine the resource involved in each interaction. REST APIs use methods, which describe the type of request being sent to the server: GET, PUT, POST, DELETE, and PATCH. Each does something different, allowing the user and the server to perform a multitude of actions. GET does what the name suggests: it asks the server to find the data you asked for and send it back to you. DELETE deletes the specified data entry. PUT updates the specified entry, and PATCH does a similar thing. POST adds a new entry. These methods return codes describing what happened with the request: 200 is a successful response, 201 is a successful creation, and so on. There are a number of codes, ranging from 100 to 599, each with a different meaning. REST APIs are flexible, allowing you to do more with them. They are used mainly on the web, but can also be used in cloud services and applications. The benefits of using REST APIs include scalability, flexibility and portability, independence, and being lightweight. The challenges are endpoint consensus, versioning, and authentication. The blog post goes into detail about all of this.
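
As a small illustration of those methods and codes, here is how a client might call a REST API with JavaScript’s Fetch API; the URL is a placeholder.

    // Placeholder URL; any REST endpoint would behave similarly.
    const url = 'https://api.example.com/items';

    // GET: ask the server for data
    const getResponse = await fetch(url);
    console.log(getResponse.status); // 200 on success

    // POST: add a new entry, sent as JSON in the request body
    const postResponse = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: 'new item' }),
    });
    console.log(postResponse.status); // 201 if the entry was created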

I chose this blog post because it did a good job of explaining everything about REST APIs. It even includes a YouTube video, which also explains what is in the post. APIs are used everywhere, so it is interesting to learn about something that is essentially a part of all computer-related things. Although this post is about REST APIs, there are a number of other kinds of APIs, each offering something different.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Understanding UML: A Simple Guide to the Unified Modeling Language

In 1997, the Object Management Group (OMG) introduced the Unified Modeling Language (UML). It was created to help IT professionals design and communicate software systems more easily. Think of it like blueprints for a building: UML gives developers a standard way to plan and share their ideas about how a system should work.

UML has become a popular tool in the tech world. You’ll often see it listed on resumes, but many people don’t actually know how to use it well. That’s why learning the basics of UML is important if you want to include it in your skillset. In this guide, we’ll cover an article written by Donald Bell, a solutions architect at IBM, along with some of the most common diagrams and how they’re used.

What Makes UML Special?

UML is not tied to a specific programming language. This makes it flexible and easy to use in many different environments, whether you’re working with Java, .NET, or something else. Also, UML is a language, not a method. This means it can fit into any company’s way of working without requiring big changes.

The main purpose of UML is to help teams understand and share their ideas more clearly. By using UML diagrams, teams can communicate how a system will work, making it easier for new members to join a project and get up to speed quickly.

Key Types of UML Diagrams

Use-Case Diagrams: These show how users (called “actors”) interact with the system. For example, they can illustrate how a customer logs into an app or makes a purchase. Use-case diagrams are simple and focus on the system’s main functions.

Activity Diagrams: These diagrams show the flow of actions in a process. They’re great for mapping out workflows, like how a customer service ticket moves from “open” to “resolved.” Activity diagrams are easy to understand, even for people who don’t have a technical background.

Deployment Diagrams: These focus on where parts of the system will run, like servers or applications. They show how different pieces of the system communicate and help teams plan how everything will work in real life.

Why UML Still Matters

UML has been around for over 25 years, but it’s still widely used because its core ideas are timeless. Much like classic software books that are still relevant today, UML helps solve problems that developers face every day.

Even without fancy tools, you can start using UML with just a whiteboard or pen and paper. By practicing with basic diagrams, you’ll improve how you share your ideas and work with others on software projects. Keep learning, and UML can become one of your most useful tools!

Reference

https://developer.ibm.com/articles/an-introduction-to-uml/

From the blog CS@Worcester – The Bits & Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

Clean Code: The Foundation to Readable Organized Code

As I look back at my older projects and code, the lack of organization and structure is obvious. I’m thankful for my detailed comments, because that code would have taken much longer to read without them. In our course, we went over the principles and practices associated with writing “clean code.” To deepen my knowledge on this matter I found an article called “How to Write Clean Code – Tips and Best Practices (Full Handbook)” by German Cocca. I trust this resource because it comes from freeCodeCamp; I have used this website in the past and think it is a very useful source of free information. I also trust the author because he is a full-stack developer, and this comes from his own blog.

The article states that clean code is more than just code that can run and function. Clean code should be easy to read, understand, maintain over time, and break down. His pillars of clean code are effectiveness, efficiency, and simplicity. While the focus of coding should always be the functionality of the code, it should also optimize resource usage while maintaining clarity.

The most important idea I took from learning about clean code in the classroom was that if you have to comment your code, you didn’t write it clearly or efficiently enough. The same goes for functions: functions should be kept as small as possible. This kind of thinking helped me step back and re-evaluate my coding approach, and because of it I feel I write more readable, efficient code now.

The article also goes over a very important idea that had a similar effect on my coding: SRP, the single responsibility principle. It means every class or module should have only one job. If you need to validate orders, calculate a total, or save data, these should each be done in their own separate methods, classes, or functions, as in the sketch below. This makes the code much more readable and makes implementing these functions easier.
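
Here is a tiny JavaScript sketch of that idea; the order-related names are my own, not the article’s.

    // Each function has exactly one job, per the single responsibility principle.
    function validateOrder(order) {
      return order.items.length > 0 && order.items.every((i) => i.price >= 0);
    }

    function calculateTotal(order) {
      return order.items.reduce((sum, item) => sum + item.price, 0);
    }

    function saveOrder(order, db) {
      db.insert('orders', order); // persistence stays separate from the logic above
    }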

Some of these concepts have already been discussed or touched upon in my class, but the concept of modularization was not. Modularization involves breaking down complex code into much smaller pieces, which makes the code easier to test, maintain, and read. Folder structure was also not talked about in my class. Folder structure is crucial for keeping a clean, scalable codebase: the structure should keep related files together based on their functionality. For example, instead of organizing by file type, you would organize by feature, as shown below. If every feature has its own place, it will be easier to go back and modify it.
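
A feature-based layout might look something like this (an illustrative sketch, not a prescribed structure):

    src/
      orders/              // everything related to orders lives together
        orders.js
        orders.test.js
      payments/            // same for payments: logic and tests side by side
        payments.js
        payments.test.js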

I enjoyed looking into this site because it laid everything out in a nice, organized manner. It explained everything briefly enough to maintain my interest, but was in-depth enough that I was getting the information and knowledge I needed. The website also provides nice code examples for everything it mentions.

Reference:

https://www.freecodecamp.org/news/how-to-write-clean-code/ – How to Write Clean Code – Tips and Best Practices (Full Handbook) by German Cocca

Tags: CS@Worcester, CS-343, Week-11

From the blog CS@Worcester – SPM blog by Aaron Nano and used with permission of the author. All other rights reserved by the author.

The Most Useful Tool in a Developer’s Toolkit: Development Environments

Intro

Choosing a development environment is a decision that can be made on feeling, or by taking the time to think through each choice and analyze which best fits your needs. Either way, the environment a developer uses is extremely important: it is where all the code in a project is written, making it the tool every developer spends the most time using. It’s a personal choice, and this blog by Matthew LeRay goes over everything you need to know about development environments.

Summary of Source

This blog covers everything you need to know about development environments, including their purpose, their importance, and what IDEs are, with some examples. The main sections are:

  1. Definition and Purpose: A structured setup of tools and processes that enhances software creation by automating tasks, supporting debugging, and ensuring consistency with production.
  2. Types of Development Environments: Explains the purpose and distinct roles of development, testing, staging, and production environments.
  3. Integrated Development Environment (IDE): The evolution of IDEs and what they offer, such as speed/efficiency and customizable features.
  4. Setting up a Development Environment: Goes through the steps of configuring your environment, from choosing the IDE, to configuring it and using tools like build automation.

The Reason I Chose This Source

For any new programmer, looking for an IDE to use can be confusing because of the lack of knowledge about what they even are, mixed with the daunting task of choosing one to learn and use. I chose this blog because it bridges that gap of being a new programmer having no idea what a developer environment even is, to choosing and setting up their IDE. It’s a very reader friendly resource, that even some experienced developers could learn from.

A Reflection of IDE’s

I personally use Visual Studio Code for the majority of what I program, but I have used IntelliJ as well. I chose my IDE based more on appearance and general word of mouth, which is why I gravitated towards VS Code; it’s arguably the most popular and user-friendly IDE. I do like IntelliJ as it feels good to use, and although a drawback for others might be that it’s a Java-only IDE, I only use Java, so that isn’t a problem for me. VS Code also has a great variety of personalization options because of its extensions tab. Extensions are great not only for appearance but also for functional improvements. I think extensions are a big reason VS Code is so popular, along with its ability to support many languages rather than being restricted to one the way IntelliJ is. An IDE encompasses a ton of different tools a developer uses, so picking one that fits your needs is important. Becoming comfortable and familiar with the IDE you use matters more than switching to the “best” IDE based on abstract metrics that others believe are the most important thing to have in an IDE.

My Future IDE Plans

I think I will continue to use VS Code for now, but I can see myself trying out more technical, not-so-user-friendly editors like Vim in the future. There really isn’t a need to switch if what I have is working, and honestly I don’t think it should be switched often. I will also probably utilize IntelliJ more, as I do think it’s the best IDE for Java, which is the language I use most often.

Citation

Understanding Modern Development Environments: A Complete Guide by Matthew LeRay

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

SOLID Principles Made Easy

As my year progressed I wanted to better understand the basic principles of software design, so I sought out the SOLID principles after hearing about them throughout college. I had not yet put the time into properly learning and practicing them, and I came across a blog that puts the principles into text simply and with excellent examples. The article’s title is The SOLID Principles of Object-Oriented Programming Explained in Plain English by Yiğit Kemal Erinç.

The article begins by identifying what the SOLID principles are and why they matter. These principles are five key guidelines for object-oriented design that help developers create maintainable, flexible, and understandable code. Each principle is covered in great detail, with several coding examples and potential drawbacks or dangerous pitfalls. They are:

  1. Single Responsibility Principle (SRP): A class should have one responsibility and one reason to change. By adhering to the SRP, you minimize complexity, avoid conflicting changes, and simplify version control and collaboration.
  2. Open-Closed Principle (OCP): Classes should be open for extension but closed for modification. This means new functionality can be added without altering existing code, often by using interfaces or abstract classes (a minimal sketch follows this list).
  3. Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types without altering the correctness of the program. 
  4. Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they don’t use. This principle advocates for creating small, specific interfaces rather than large, general-purpose ones. It ensures that classes only implement methods relevant to their functionality, and there is less bloat and redundancy in the code.
  5. Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. This promotes flexibility and low coupling, making systems easier to extend and modify.
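
As a minimal JavaScript sketch of the Open-Closed Principle from item 2 (the shape classes are my own illustration, not the article’s):

    // totalArea() is closed for modification: it depends only on the
    // area() abstraction, so new shapes extend the system without edits here.
    class Circle {
      constructor(r) { this.r = r; }
      area() { return Math.PI * this.r ** 2; }
    }

    class Square {
      constructor(side) { this.side = side; }
      area() { return this.side ** 2; }
    }

    function totalArea(shapes) {
      return shapes.reduce((sum, s) => sum + s.area(), 0);
    }

    console.log(totalArea([new Circle(1), new Square(2)])); // ~7.14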

The key takeaway presented in the conclusion is that by following SOLID principles, developers can write cleaner, more testable, and scalable code.

The SOLID principles are essential building blocks for writing efficient code. This topic was addressed in class through design smells and patterns, and by learning it through the lens of SOLID I came to appreciate its importance in building good code. Through the CS-343 course, I was looking to understand those design patterns better, and by understanding them as SOLID principles I am able to grasp them more fully. Thanks to this article, in combination with the class material, I will be prepared to answer questions about SOLID principles in context.

Source: https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

The Hidden Backbone of Software Development: Software Documentation

Intro

In software development, documentation often takes a backseat to coding, testing, and deploying. However, software documentation is the backbone of more easily maintainable, scalable, and collaborative projects. This blog post by David Oragui gives useful information on why documentation is essential and how it supports both developers and end-users throughout the software lifecycle.

Summary of Source

The blog post explores the role of software documentation and offers practical advice on creating and maintaining it effectively. The main sections are:

  1. What is Software Documentation?: A definition and explanation of how it serves as a guide for developers, users, and stakeholders, providing clarity on a system’s functionality and usage.
  2. Types of Documentation: A breakdown of key categories, including user documentation, developer guides, technical documentation, and process documentation.
  3. Best Practices for Writing Documentation: Practical tips, such as structuring content logically, using plain language, and keeping documentation up-to-date.
  4. Using Software Documentation Tools: The different types of documentation tools and the reasons to consider them, including automation, collaboration, and accessibility.

Why I Chose This Blog

I selected this blog because it is a concise resource that explores all there is to know about documentation, making it a great guide to refer back to when needed. In my coursework, the focus has largely been on coding, but I’ve noticed that a lack of proper documentation can make even the best-written software hard to use and maintain. This blog stood out for its clear and actionable advice, which is especially valuable as I create projects and prepare for internships.

Reflection

The blog’s structured approach to explaining software documentation makes it great as an introductory resource. One section that particularly stood out was the breakdown of the different documentation types. It clarified the different audiences for documentation, namely end-users, developers, and stakeholders, and how each requires tailored content. For example, user documentation should be simple and accessible, while developer guides need to be more technical and detail-oriented.

This difference in target audiences was an eye-opener, even though it seems obvious once it’s said. There’s no reason to include technical details in documentation an end-user will see, because they won’t understand or even need them. Whenever I thought about software documentation before, it was as a one-size-fits-all document that explained the technical details of the software.

Another valuable takeaway was the emphasis on keeping documentation up to date. It made me consider the consequences of never updating documentation: incorrect information could end up causing a lot of trouble. When the point of documentation is to make a project easier for all parties to understand, outdated docs contribute to the very problem they’re trying to solve.

Future Application

Moving forward, I plan to apply these best practices to my projects. I will create separate documentation resources for different target audiences, and I will update the docs every time a change is made so there is no confusion from outdated information. Software documentation simply makes the whole process of understanding and maintaining code a lot easier, so there is really no downside to adding quality documentation to a project.

Citation

Software Documentation Best Practices [With Examples] by David Oragui

https://helpjuice.com/blog/software-documentation

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Semantic Versioning or Updating Numbers?

Versioning plays an important role in development: as we know, it offers a clear framework for tracking changes and updates to code. One system that has become widely adopted is semantic versioning (SemVer), which provides a standardized approach to naming software releases. For this blog entry, I chose to review the AWS blog post on semantic versioning. This resource offers a detailed exploration of how SemVer simplifies release management and ensures consistent communication among developers, operations teams, and users.

The Article

The article introduces semantic versioning as a three-part versioning system, MAJOR.MINOR.PATCH (e.g., 2.3.1), where each segment corresponds to a specific type of change (a small sketch after the list shows how each bump works):

  • MAJOR: Incremented when incompatible API changes occur.
  • MINOR: Updated when new features are added in a backward-compatible way.
  • PATCH: Increased when bug fixes or minor changes that don’t affect the API are implemented.
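
To make those rules concrete, here is a toy JavaScript function of my own that applies each kind of bump; note that bumping a segment resets the segments below it.

    // Toy illustration of semantic version bumps (not a published library).
    function bump(version, level) {
      let [major, minor, patch] = version.split('.').map(Number);
      if (level === 'major') { major += 1; minor = 0; patch = 0; }
      else if (level === 'minor') { minor += 1; patch = 0; }
      else if (level === 'patch') { patch += 1; }
      return `${major}.${minor}.${patch}`;
    }

    console.log(bump('2.3.1', 'patch')); // '2.3.2' - bug fix
    console.log(bump('2.3.1', 'minor')); // '2.4.0' - backward-compatible feature
    console.log(bump('2.3.1', 'major')); // '3.0.0' - breaking API change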

The blog highlights how this system helps manage software lifecycles by making the scope of changes transparent, which supports better collaboration between teams. It also emphasizes the importance of tagging releases in version control systems like Git to align codebase changes with their respective version numbers.

The article also showcases real-world applications of semantic versioning in continuous integration/continuous delivery (CI/CD) pipelines. For example, tools like AWS CodePipeline and CodeDeploy can use version tags to automate deployment processes, ensuring consistency and reducing the likelihood of errors.

Why I Chose This Resource

In class, we learned about the importance of versioning in software projects and how clear communication of changes can prevent confusion, particularly in team environments. Semantic versioning was introduced as a standard approach, but I wanted to explore its real-world applications further. This AWS blog post stood out because it not only explained SemVer but also demonstrated its practical use in release management and CI/CD pipelines, areas that weren’t covered in the current coursework.

Personal Reflections

The resource clarified how semantic versioning improves team collaboration by setting clear expectations about software updates. For instance, knowing that a MINOR update is backward-compatible or that a MAJOR update might require significant adjustments removes ambiguity for developers and users.

One aspect that particularly resonated with me was the integration of semantic versioning into CI/CD workflows. The article’s example of automating deployments based on version tags helped me understand how these practices can streamline release management, reducing manual errors and accelerating delivery timelines. I had not considered how tools like AWS CodePipeline could interact with semantic versioning to achieve this level of efficiency.

Future Practice(s)

I plan to adopt semantic versioning in all future projects, starting with any academic group work. For example, using version tags in Git will help my team better manage changes and understand the implications of updates. Additionally, I want to experiment with automating deployments using CI/CD tools like GitHub Actions or AWS CodePipeline, as the article suggests.

Whether contributing to open-source projects or collaborating in a work environment, semantic versioning will help me communicate changes clearly and maintain quality and control.

Conclusion

The AWS blog post underscored the importance of semantic versioning as a tool for simplifying release management and fostering collaboration. It not only deepened my understanding but also inspired me to integrate these practices into my current and future workflows. Semantic versioning is more than just a numbering system; it’s a critical framework for ensuring clarity, consistency, and efficiency in software development. Thank you for reading my blog!

https://aws.amazon.com/blogs/devops/using-semantic-versioning-to-simplify-release-management

https://www.geeksforgeeks.org/introduction-semantic-versioning/

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

Transparency and Autonomy: Better Together

In continuing my research on team management strategies, I delved deeper into the software development side of team management. In doing so I discovered the scrum.org blog, which has many different articles aimed at understanding Scrum. Two of the most important principles in Scrum are transparency and autonomy, and I wanted to understand how to achieve them in a team setting. The article I found explains how the two play into each other. The article’s title is Transparency and Autonomy: Two Sides of the Same Coin by Sanjay Saini.

The article begins by explaining Agile’s fast-paced style of producing working code and how autonomy and independence can be essential for fast results. It explains that in Agile, teams seek this autonomy to make decisions and deliver value without excessive oversight, and that transparency is essential for fostering that autonomy. By making work visible, tracking progress, and openly addressing challenges, teams can earn more autonomy and trust. The article highlights five key points that help build this trust and efficiency:

  1. Visibility Creates Trust: By sharing progress and challenges during Scrum events like Daily Scrums and Sprint Reviews, it shows that the team is accountable and can be trusted to be autonomous.
  2. Transparency in Challenges Leads to Solutions: Being open about struggles encourages collaboration and problem-solving, proving the team can manage setbacks and seek out help when they need it independently.
  3. Data-Driven Transparency Builds Confidence: Using metrics like velocity and burndown charts shows consistent results, building leadership confidence in the team’s capability.
  4. Transparency Causes Better Decision-Making: When a team has full visibility into goals, priorities, and feedback, it can make informed decisions independently. Information needs to be freely shared for autonomy and good decision-making to occur.
  5. Open Communication Builds Long-Term Autonomy: Regular, open communication about decision-making processes helps cultivate trust and secure more autonomy over time, as the team can continue to build trust through constant demonstration of these values.

The article concludes by saying that transparency creates a culture of trust and accountability, enabling Scrum teams to earn the autonomy needed to make decisions and drive value.

This article helped me understand the importance of these values to a Scrum team’s operation. This is a key step in understanding Scrum’s importance in the operation of a team, as transparency can create a smoother work environment for everyone by enabling autonomy. Next on my blog, I will look into articles about development environments such as Docker or GitPod and their importance for maintaining a productive team.

Source:

https://www.scrum.org/resources/blog/transparency-and-autonomy-two-sides-same-coin

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Merge Conflicts

I believe that version control systems like Git are an important tool for developers. Yet one of the more challenging aspects of working with Git is resolving merge conflicts, a common occurrence in collaborative projects. For this blog entry, I chose to review the Graphite guide on resolving merge conflicts. It provides a clear, step-by-step approach to handling merge conflicts, and I found it both insightful and practical after first learning the topic through homework and in class.

Guide to Merge Conflicts

The guide explains the basics of merge conflicts in Git, outlining what they are and why they occur. It details the types of conflicts, which arise from edits to the same line of code or from overlapping changes across different branches. It then walks through resolving conflicts using Git commands like git status and git diff to identify the issues and git merge to bring the changes together. The guide concludes with best practices for preventing merge conflicts, such as pulling the latest changes regularly, using feature branches, and maintaining clear communication within a team.
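
For anyone who hasn’t hit one yet, a conflicted file looks something like this. The JavaScript function is a made-up example, but the <<<<<<<, =======, and >>>>>>> markers are what Git actually inserts.

    // After a merge, Git marks the overlapping edit inside the file:
    function greet(name) {
    <<<<<<< HEAD
      return `Hello, ${name}!`; // your branch's version
    =======
      return `Hi there, ${name}.`; // the incoming branch's version
    >>>>>>> feature
    }
    // To resolve: keep one version (or combine them), delete the three
    // marker lines, then stage the file with git add and complete the merge.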

Why I Chose This Resource

I chose this resource because merge conflicts were a little confusing to me at first, and reading multiple articles and guides like this one refreshes your knowledge. Merge conflicts are not just a concept we’ve discussed in passing: in class we learned about the importance of version control in collaborative coding environments, how tools like Git enable teamwork by allowing simultaneous contributions, and how conflicts can arise when changes overlap. Even so, this has to be one of the most stressful aspects of group projects.

Personal Reflections and Insights

Reading this guide helped demystify merge conflicts. I particularly liked the detailed explanations of the commands, as it’s easy to misuse or misinterpret them under pressure or when you’re unsure what’s going on. While I’ve often focused on “fixing the conflict,” I’ve skipped verifying how the changes interact, which has caused issues in past projects.

Another valuable takeaway was the importance of adopting preventive measures. In class, we learned about best practices like pulling changes frequently and using feature branches, but this guide provided additional context that made these tips feel much more actionable.

Future Practice

I want to apply this knowledge in upcoming group projects. Whether working on a shared repository for class or contributing to open-source projects, knowing how to resolve merge conflicts efficiently will save time and reduce confusion. This guide also inspired me to explore additional tools, like Visual Studio Code’s merge conflict interface, to streamline the process further. By combining these technical skills with teamwork, I will be better prepared to contribute effectively in collaborative environments when merge conflicts arise.

https://graphite.dev/guides/how-to-resolve-merge-conflicts-in-git

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.