Category Archives: Week 7

Why API Testing Matters: Ensuring Robust Software Performance

The blog post discusses why developers should use API testing and how it is becoming increasingly important, particularly as microservices architecture gains popularity. This architecture requires application components to operate independently, each with its own data storage and commands. As a result, individual components can be updated quickly without disrupting the whole system, so users can continue using the application without interruption.

Most microservices are based on application programming interfaces (APIs), which specify how to communicate with them. APIs usually use REST calls over HTTP to simplify data sharing. Despite this, many testers still rely on user interface (UI) testing, particularly with the popular Selenium automation tool. While UI testing is necessary to verify interactive functionality, API testing is more efficient and dependable. It enables testers to manipulate data directly and detect flaws early in the development process, even before the user interface is built. API testing is also important for identifying security flaws.

To effectively test APIs, it is critical to understand the fundamentals. API calls are REST requests that retrieve or update data in a database. Each REST request consists of an HTTP verb (which specifies the action), a URL (which indicates the target), HTTP headers (which provide additional information to the server), and a request body (which contains the data, usually in JSON or XML). Common HTTP methods are GET (retrieving a record), POST (creating a new record), PUT (replacing a record), PATCH (partially updating a record), and DELETE (removing a record). The URL specifies which data is affected, while the request body is used with actions such as POST, PUT, and PATCH.
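
To make that anatomy concrete, here is a minimal sketch using only Python’s standard library; the endpoint (https://api.example.com/products) and the payload are assumptions made up for illustration.

```python
import json
import urllib.request

# A rough sketch of the request anatomy described above, using only the
# Python standard library. The endpoint is hypothetical (api.example.com
# is an assumption for illustration, not a real API).

body = json.dumps({"name": "Green tea", "quantity": 3}).encode("utf-8")

request = urllib.request.Request(
    url="https://api.example.com/products",       # the URL: which data is targeted
    method="POST",                                # the HTTP verb: create a new record
    headers={"Content-Type": "application/json"}, # headers: extra info for the server
    data=body,                                    # the request body, here as JSON
)

with urllib.request.urlopen(request) as response:
    print(response.status)      # the response code, e.g. 201 for a created record
    print(json.load(response))  # the response body, if the server returned one
```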

When a REST request is made, the server responds with HTTP headers describing the response, a response code indicating whether the request was successful, and, in some cases, a response body containing additional data. Response codes are categorized as follows: 200-level codes represent success, 400-level codes indicate client-side issues, and 500-level codes signify server-side faults.

To effectively test APIs, testers must first understand the types of REST requests supported by the API and any limitations on their use. Developers can use tools like Swagger to document their APIs. Testers should ask clarifying questions about available endpoints, HTTP methods, authorization requirements, required data, validation limits, and expected response codes.

API testing often begins with creating requests in a user-friendly tool like Postman, which allows for easy viewing of results. The initial tests should focus on “happy paths,” or typical user interactions. These tests should include assertions to verify that the response code is correct and that the returned data is accurate. Negative tests should then be run to confirm that the application handles problems correctly, such as invalid HTTP verbs, missing headers, or unauthorized requests.
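
As a rough illustration of what such tests check, here is a sketch of one happy-path test and one negative test in plain Python; the endpoint and the expected status codes (201 on creation, 400 for a malformed request) are assumptions about a hypothetical API, not the tooling the post describes.

```python
import json
import urllib.request
from urllib.error import HTTPError

# One happy-path test and one negative test against a hypothetical endpoint.

BASE = "https://api.example.com/products"

# Happy path: create a record, then assert on the code and the returned data.
request = urllib.request.Request(
    BASE,
    method="POST",
    headers={"Content-Type": "application/json"},
    data=json.dumps({"name": "Green tea", "quantity": 3}).encode("utf-8"),
)
with urllib.request.urlopen(request) as response:
    assert response.status == 201, f"expected 201, got {response.status}"
    created = json.load(response)
    assert created["name"] == "Green tea", "returned data should match what was sent"

# Negative test: a POST with no body should be rejected cleanly, not crash.
try:
    urllib.request.urlopen(urllib.request.Request(BASE, method="POST"))
    raise AssertionError("server accepted an invalid request")
except HTTPError as error:
    assert error.code == 400, f"expected 400, got {error.code}"
```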

Finally, the blog underlines the necessity of API testing and encourages engineers to transition from UI testing to API testing. This shift enables faster and more reliable testing, which aids in the detection of data manipulation issues and improves security.

Blog: https://simpleprogrammer.com/api-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Path Testing in Software Engineering

Path Testing is a structural testing method used in software engineering to design test cases by analyzing the control flow graph of a program. This method helps ensure thorough testing by focusing on linearly independent paths of execution within the program. Let’s dive into the key aspects of path testing and how it can benefit your software development process.

The Path Testing Process

  1. Control Flow Graph: Begin by drawing the control flow graph of the program. This graph represents the program’s code as nodes (each representing a specific instruction or operation) and edges (depicting the flow of control from one instruction to the next). It’s the foundational step for path testing.
  2. Cyclomatic Complexity: Calculate the cyclomatic complexity of the program using McCabe’s formula: V(G) = E − N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components. This complexity measure indicates the number of independent paths in the program (a worked sketch follows this list).
  3. Identify Independent Paths: Create a basis set of linearly independent paths through the control flow graph. The cardinality of this set should equal the cyclomatic complexity, ensuring that all unique execution paths are accounted for.
  4. Develop Test Cases: For each path identified, develop a corresponding test case that covers that particular path. This ensures comprehensive testing by covering all possible execution scenarios.
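
As referenced in step 2 above, here is a small worked sketch of the complexity calculation in Python, using a hypothetical control flow graph for a function containing one if/else branch and one loop:

```python
# A worked sketch of McCabe's formula on a hypothetical control flow
# graph, modeled as an adjacency list (node -> nodes reachable in one step).

cfg = {
    1: [2],     # entry -> loop condition
    2: [3, 4],  # condition -> if-branch or else-branch
    3: [5],     # if-branch -> join
    4: [5],     # else-branch -> join
    5: [2, 6],  # join -> back to condition, or on to exit
    6: [],      # exit
}

N = len(cfg)                                       # number of nodes
E = sum(len(targets) for targets in cfg.values())  # number of edges
P = 1                                              # one connected component

cyclomatic_complexity = E - N + 2 * P              # V(G) = E - N + 2P
print(f"V(G) = {E} - {N} + 2*{P} = {cyclomatic_complexity}")  # V(G) = 7 - 6 + 2 = 3
```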

Path Testing Techniques

  • Control Flow Graph: The initial step is to create a control flow graph, where nodes represent instructions and edges represent the control flow between instructions. This visual representation helps in identifying the structure and flow of the program.
  • Decision to Decision Path: Break down the control flow graph into smaller paths between decision points. By isolating these paths, it’s easier to analyze and test the decision-making logic within the program.
  • Independent Paths: Identify paths that are independent of each other, meaning they cannot be replicated or derived from other paths in the graph. This ensures that each path tested is unique, providing more thorough coverage.

Advantages of Path Testing

Path Testing offers several benefits that make it an essential technique in software engineering:

  • Reduces Redundant Tests: By focusing on unique execution paths, path testing minimizes redundant test cases, leading to more efficient testing.
  • Improves Test Case Design: Emphasizing the program’s logic and control flow helps in designing more effective and relevant test cases.
  • Enhances Software Quality: Comprehensive branch coverage ensures that different parts of the code are tested thoroughly, leading to higher software quality and reliability.

Challenges of Path Testing

While path testing is advantageous, it does come with its own set of challenges:

  • Requires Understanding of Code Structure: To effectively perform path testing, a solid understanding of the program’s code and structure is essential.
  • Path Count Grows with Code Complexity: As the complexity of the code increases, the number of possible paths also increases, making it challenging to manage and test all paths.
  • May Miss Some Conditions: There is a possibility that certain conditions or scenarios might not be covered if there are errors or omissions in identifying the paths.

Conclusion

Path Testing is a valuable technique in software engineering that ensures thorough coverage of a program’s execution paths. By focusing on unique and independent paths, this method helps reduce redundant tests and improve overall software quality. However, it requires a deep understanding of the code and may become complex with larger programs. Embracing path testing can lead to more robust and reliable software, ultimately benefiting both developers and end-users.

All of this comes from:

Path Testing in Software Engineering – GeeksforGeeks

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Boundary, Equivalence, Edge and Worst Case

I have learned a lot about Boundary Value Testing and Equivalence Class Testing these past few weeks. Equivalence class testing can be divided into two categories: normal and robust. The best way I can explain this is through an example. Let’s say you have a favorite shirt, and you lose it. You would have to look for it, but where? Under the normal method you would look in normal, or in a way valid, places like under your bed, in your closet, or in the dresser. Using the robust way, you would look in those usual spots but also include unusual spots. For example, you would look under your bed but then look under the kitchen table. You are looking in spots where you should find a shirt (valid) but also looking in spots where you should not find a shirt (invalid). Now, in equivalence class testing, robust and normal can each be part of two other categories: weak and strong. Going back to the shirt example, a weak search would have you looking in a few spots, but a strong one would have you look everywhere. To summarize, a weak normal equivalence class test would have you look in a few usual spots. A strong normal equivalence class test would have you look in a lot of spots. The weak robust and strong robust equivalence class tests would act similarly to the earlier two, but they would also have you look in unusual spots.
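
To put those combinations in code, here is a minimal sketch in Python, assuming a hypothetical function that validates an age (valid range 1 to 120) and a weight (valid range 1 to 500); the representative values are made up:

```python
from itertools import product

# A sketch of the four equivalence class strategies. Each list holds one
# representative per equivalence class; the specific numbers are made up.

age_valid, age_invalid = [60], [-5, 200]        # one valid class, two invalid
weight_valid, weight_invalid = [250], [0, 900]

# Weak normal: one value per valid class ("look in a few usual spots").
weak_normal = list(zip(age_valid, weight_valid))

# Strong normal: cartesian product of the valid classes.
strong_normal = list(product(age_valid, weight_valid))

# Weak robust: add each invalid value one at a time, paired with valid values.
weak_robust = (
    weak_normal
    + [(age, weight_valid[0]) for age in age_invalid]
    + [(age_valid[0], weight) for weight in weight_invalid]
)

# Strong robust: cartesian product over valid and invalid classes alike.
strong_robust = list(product(age_valid + age_invalid, weight_valid + weight_invalid))

print(len(weak_normal), len(strong_normal), len(weak_robust), len(strong_robust))
# -> 1 1 5 9
```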

Boundary value testing casts a smaller net when it comes to testing. It is similar to equivalence class testing, but it does not include weak and strong variants. It does have nominal and robust testing. It also has worst-case testing, which is unique to boundary value testing. I didn’t know much about it, so I looked online.

I used this site: Boundary Value Analysis

Worst-case testing removes the single fault assumption. This means that failures can be caused by more than one fault at a time, which leads to more tests. It can be robust or normal. It is more comprehensive than standard boundary testing due to its coverage. While normal boundary testing results in 4n+1 test cases, normal worst-case testing results in 5^n test cases. Think of worst-case testing as putting a magnifying glass on something. From afar you only see one thing, but up close you can see that there is a lot going on. This results in worst-case testing being used in situations that require a higher degree of testing.
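
Here is a rough sketch in Python of where the 4n+1 and 5^n counts come from, assuming the same sort of hypothetical two-variable function; the ranges are made up for illustration:

```python
from itertools import product

# A sketch contrasting normal boundary testing (4n+1 cases) with
# worst-case testing (5^n cases) for a hypothetical function of n = 2
# variables with made-up valid ranges.

def boundary_values(low, high):
    """The five classic boundary values: min, min+, nominal, max-, max."""
    return [low, low + 1, (low + high) // 2, high - 1, high]

ranges = {"age": (1, 120), "weight": (1, 500)}
values = {name: boundary_values(lo, hi) for name, (lo, hi) in ranges.items()}
nominals = {name: vals[2] for name, vals in values.items()}

# Normal boundary testing: vary one variable at a time, hold others at nominal.
normal_cases = {tuple(nominals.values())}
for name, vals in values.items():
    for v in vals:
        case = dict(nominals)
        case[name] = v
        normal_cases.add(tuple(case.values()))

# Worst-case testing: cartesian product of all five values for every variable.
worst_cases = set(product(*values.values()))

print(len(normal_cases), len(worst_cases))  # -> 9 (4n+1 for n=2) and 25 (5^n)
```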

I have learned a lot in these past few weeks. I have learned about boundary testing and how it differs when it is robust or normal. I have learned about equivalence class testing and how it varies when it is a combination of weak, normal, robust or strong. I have also learned about edge and worst-case testing. This is all very interesting.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Constructing Software

I recently had a talk with one of my friends about how software should be constructed. One of the main points of discussion was the structure of classes in object-oriented programming; we also spoke about storage systems such as SQL and flat-file storage. I wanted to write about this because it has had a huge impact on how I structure most of my Java/C# projects now.

For a while, the format of my code didn’t look that visually appealing in terms of class structure and naming conventions. My naming conventions for the past few personal projects were very vague, and once my friend gave me feedback on the code, I realized that when I re-read it, it came off as confusing, even though it was my own project. Every project I’ve made since this discussion has come out exceptionally clean from my viewpoint: the naming conventions are specific to their use case, and the structure for classes interacting with each other is also much more efficient. I plan on continuing to use this methodology for more projects whenever it fits the scope of the project.

The reason we discussed storage systems is that in a good amount of my projects I had done terrible storage management; it was inefficient and quite ugly overall. For projects that needed a lot of data handled without anything being lost, I would sometimes use YAML. Overall it is very easy to use, but it was slower than other options such as JSON. Once I was shown how to implement JSON instead of YAML, I noticed significant performance improvements, and along the way the code became much more readable. Flat-file storage seems to be very useful for the smaller-scale projects I mostly work on, but for larger data sets I was shown how to easily use SQL, and a little bit of Prometheus, to structure the project even better. Since configuring and setting up flat files in a project takes a decent amount of code, setting up the database itself was way better. Using tables to grab information, instead of having to manually create getters for each entry and key in a flat file, was a very good quality-of-life update that I carried into most of my other projects with larger data. All I know now is that for any future projects, depending on the data sets, I will be using JSON and SQL, as their performance is a significant upgrade.
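
For a sense of the difference, here is a minimal sketch of both approaches in Python; the records, file names, and schema are made up for illustration:

```python
import json
import sqlite3

# A sketch of the two storage approaches discussed above, using made-up
# score records. The file names and table schema are assumptions.

records = [{"name": "alice", "score": 42}, {"name": "bob", "score": 17}]

# Flat file: serialize everything, then read it all back just to query it.
with open("scores.json", "w") as f:
    json.dump(records, f)
with open("scores.json") as f:
    top_flat = max(json.load(f), key=lambda r: r["score"])

# SQL: let the database do the lookup instead of manual traversal.
db = sqlite3.connect("scores.db")
db.execute("CREATE TABLE IF NOT EXISTS scores (name TEXT, score INTEGER)")
db.executemany(
    "INSERT INTO scores VALUES (?, ?)",
    [(r["name"], r["score"]) for r in records],
)
top_sql = db.execute(
    "SELECT name, score FROM scores ORDER BY score DESC LIMIT 1"
).fetchone()
db.commit()

print(top_flat, top_sql)  # the same answer, found two different ways
```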

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

UML Diagrams

One of the first topics we were introduced to was UML diagrams, which represent a system, such as its structure or the order of its operations, in a visual way. In class, we learned UML class and sequence diagrams. Class diagrams visualize the code files, listing necessary information like variables and methods, whether they are private, public, static, or abstract, and can include notes for special cases. Sequence diagrams visualize the order in which the code in main executes. UML was also used in making diagrams to explain the architecture of systems.

In this blog post, by Nishadha, they go into detail about the types of diagrams and provide examples. UML stands for Unified Modeling Language and is used to model “software solutions, application structures, system behavior and business processes.” There are 14 types of diagrams, which can be organized into two kinds: structure diagrams and behavioral diagrams. Structure diagrams “show different objects in a system.” Class diagrams and architecture diagrams belong to this group. Behavioral diagrams “show what should happen in a system.” Sequence diagrams belong in this group.

Structure diagrams are used to show the relationship between classes and/or objects in a software system. There are seven different diagrams: class, component, deployment, object, package, profile, and composite structure. Each offers something a little different from the others, but they do essentially the same thing. Class diagrams show the relationships of classes in software, while object diagrams show the relationships of objects in a real-life setting. Finding one that matches the situation you need it for is not hard.

Behavioral diagrams are used to show how things are supposed to work. Like structure diagrams, there are also seven: use case, activity, state machine, sequence, communication, interaction overview, and timing. Each is more distinct from the others than the structure diagrams are, but they all do the same thing: show the flow of something. The flow of logging into an account, the flow of shopping, the flow of code, etc. By visualizing the flow, you can see what is happening, spot places that could have issues, and refine the process to be the best it can be. The post includes examples and templates of each type of diagram in case someone is interested in seeing more or wants to use one.

I chose this particular blog post because I did not know that there were this many types of UML diagrams. With this broad a range, I can see UML being used not just for software and businesses but for other things as well. Visualizing things is very beneficial, allowing you to keep track of them and make them concrete, instead of just picturing them in your head. I think UML is cool; it is certainly different from what I have learned in the past, and I will use it whenever I need to. It is also not that hard to learn, making it a bit more accessible.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

“Cheating” Is Okay, as Long as You Are Learning

This week, I will be talking about “cheating,” or at least what some consider to be such when solving problems. Looking at the answer to a problem while learning, especially as a beginner, can be a valuable tool when used with thought and care. While some might argue that relying on solutions stifles problem-solving skills, it is important to recognize that learning is an iterative process. When approached with the right mindset, checking the answer can enhance your understanding of the material and increase your chances of retaining that information.

First, looking at the answer can help clarify misunderstandings. Sometimes, learners approach a problem with a misconception, incomplete understanding, or flawed method, which often leads to frustration and confusion. I can recall several times when I was required to solve a tough problem and hit a wall because everything I knew didn’t work for me. However, by reviewing the correct solution, learners can pinpoint where their approach went wrong, gaining insight into both the problem-solving process and the underlying concept. This prevents the reinforcement of bad methods and allows for a more efficient learning experience.

In the podcast, John says, “If you’re trying to ride a bike, you’re not going to go without the training wheels on for the very first couple of runs–it’s totally okay to peek at the solution to get past whatever wall, or to see what new technology you just weren’t even looking at because you didn’t know it was a thing.” Additionally, seeing the correct answer like this can serve as a model for solving future problems. It provides an example of how to think critically and apply things in practice.

However, for this to be truly beneficial, you need to treat looking at the answer as part of the learning process, not the end. It’s important for learners to first make an honest attempt to solve the problem on their own. After reviewing the answer, they should take the time to understand each step fully, perhaps reworking the problem from scratch without referring to the solution. This ensures that they’re not just memorizing answers but are internalizing the concepts and strategies necessary for independent problem-solving.

In conclusion, looking at the answer to a problem is okay, and even beneficial, as long as the goal is learning. When used properly, it helps clarify misunderstandings, serves as a guide for future problems, and promotes deeper understanding. The key is to approach it with curiosity and a willingness to engage deeply with the material, ensuring that it enhances, rather than replaces, the learning process.

This episode can be watched in full, for free here on YouTube: https://www.youtube.com/watch?v=T7AaBcNj-mA&ab_channel=noobs%2F%2FaNetworkChuckPodcast

From the blog CS@Worcester – Owen Santos Professional Blog by Owen Santos and used with permission of the author. All other rights reserved by the author.

Crafting Clean Code

Link to blog: https://codingsans.com/blog/clean-code

Clean code has been a major topic in our discussions for the past few weeks, as we’ve seen plenty of examples of both clean and bad code. We’ve learned the reasons why someone would write bad code, as well as the importance of clean code. While those discussions enhanced my understanding of clean code, I wanted to take a deeper dive into it and get another person’s viewpoint on the topic. Through my research, I found Clean Code: The Manager’s Guide to Building Quality Software by Gábor Zöld.

Throughout his blog, Zöld provides a deep dive into what clean code is and why it is important, as well as general principles one should follow when writing code. He offers advice for managers who may be looking for ways to encourage their team members to prioritize clean code in their projects, and he gives an overview of the current state of software development as well as the ethics a developer should follow when writing code.

I chose this resource because it applies to what we have been learning for the past few weeks, and it provides insight into how clean code operates in a workplace. The blog is based on the clean code principles from Robert C. Martin, which we went over in class. The blog is also very well organized and easy to read, which shows that Zöld clearly knows what he’s talking about.

The key takeaways I have from reading this blog post are to ensure that the functions I write are as small as possible and contain descriptive variable names. I also came away with the fact that clean code actually makes you work faster: if you are trying to push code out to meet a deadline, you won’t be as efficient coding in an unorganized workspace. From an employee’s point of view, it is very important to focus on the quality of your code, as well as to understand the human aspect of it and to be trusting and understanding of other developers.
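
As a small illustration of those takeaways, here is a hypothetical before/after sketch of the same calculation in Python, not an example from Zöld’s post:

```python
# A hypothetical before/after sketch: first one vague function with
# cryptic names, then small functions named for exactly what they do.

# Before: cryptic names, mixed responsibilities.
def calc(d, t):
    s = 0
    for i in d:
        s += i[0] * i[1]
    return s + s * t

# After: each function is small and descriptively named.
def line_item_total(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

def order_subtotal(line_items: list[tuple[int, float]]) -> float:
    return sum(line_item_total(quantity, price) for quantity, price in line_items)

def order_total(line_items: list[tuple[int, float]], tax_rate: float) -> float:
    subtotal = order_subtotal(line_items)
    return subtotal + subtotal * tax_rate

print(order_total([(2, 9.99), (1, 4.50)], 0.0625))  # same result as calc(...)
```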

This material made me more aware of some of the bad habits that I have when writing code. I now notice that sometimes I will rush code just to try to finish it fast, but it ends up leaving a messy chunk of code that I have to reorganize later. However, if I followed the principles of clean code from the beginning, it would’ve saved a lot of time and also would’ve been more beneficial to my learning. Moving forward, I’m going to apply these principles to all of my future coding projects by focusing on the names and sizes of functions, ensuring that the names are descriptive and that they don’t take up too much space. I’m also going to apply some of Zöld’s comments regarding the workplace in the future, specifically his points of being understanding towards your other developers and coworkers.

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.

Uncovering Other Types of Software Architecture

Link to blog: https://dzone.com/articles/top-10-software-architecture-patterns-to-follow

Recently, we have covered a few different types of software architecture, namely Monolith, Client-Server, and Microservices. While these each have their respective use-cases, I wanted to explore more possibilities for the architecture of a software. Because of this, I have found the Top 10 Software Architecture Patterns to Follow in 2024 by Lucas Lagone. 

Lagone establishes a format for his top ten list by describing the overview, benefits, and use case of each item. His number one software architecture pattern is layered architecture. According to Lagone, layered architecture (or N-tier architecture) divides the software into several layers, each responsible for its own functionality (Lagone). This type of architecture encourages modularity, allowing for independent development of different layers of the software, and can be utilized for a variety of applications (Lagone).
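
To make the idea concrete, here is a minimal sketch of a layered design in Python; the feature and class names are my own illustration, not taken from Lagone’s article:

```python
# A sketch of layered architecture for a hypothetical user-lookup
# feature. Each layer talks only to the layer directly beneath it, so
# any layer can evolve independently.

class DataLayer:
    """Persistence layer: owns the storage details."""
    def __init__(self):
        self._users = {1: "Ada", 2: "Grace"}

    def find_user(self, user_id):
        return self._users.get(user_id)

class ServiceLayer:
    """Business layer: rules and validation, no storage knowledge."""
    def __init__(self, data):
        self._data = data

    def get_user_name(self, user_id):
        name = self._data.find_user(user_id)
        if name is None:
            raise LookupError(f"no user with id {user_id}")
        return name

class PresentationLayer:
    """Top layer: formats output for the caller."""
    def __init__(self, service):
        self._service = service

    def show(self, user_id):
        return f"User #{user_id}: {self._service.get_user_name(user_id)}"

# Because each layer depends only on the one below, the storage behind
# DataLayer could be swapped for a real database without touching the rest.
app = PresentationLayer(ServiceLayer(DataLayer()))
print(app.show(1))  # -> User #1: Ada
```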

Lagone continues his list, discussing the rest of the architectures below:

  • Serverless Architecture
  • Hexagonal Architecture
  • Event Sourcing
  • Model-View-Controller
  • Microservices Architecture (covered!)
  • Service-Oriented Architecture
  • Event-Driven Architecture
  • Repository Pattern
  • Reactive Architecture

Lastly, Lagone writes about how one can choose the right architecture for their needs, and what factors come into play. Some of these factors include the complexity of the problem domain, the project’s requirements and goals, the scalability of the project, security, team expertise, etc. (Lagone). 

I selected this resource because it is a recently written (November 2023) article and a good reference for an up-to-date list of software architectures currently in use. It’s a great resource that clearly gets the point across for each item in the top 10, while being a short enough read to serve as a quick introduction for those unfamiliar with the material.

Personally, I found this to be the perfect article for me to quickly learn more software architecture patterns. It doesn’t go too deep into the software jargon and terminology that only an experienced professional would know, yet it still talks to you in a way that assumes you are an educated yet learning individual. I really liked the formatting of the top 10 list, as it gave organization, purpose, and distinction to each of the items. I learned many other types of architecture, and specifically what to consider when selecting a software architecture pattern for your project/team. This is probably the most important part, as choosing an incompatible pattern for your needs could set you back even further than you were before. I hope to apply this to future jobs where I can contribute to the selection of an architecture based on this knowledge.

From the blog CS@Worcester – Josh's Coding Journey by joshuafife and used with permission of the author. All other rights reserved by the author.

CS 448-01 Team 3: Sprint 1 Retrospective (2/22)

The beginning of this sprint was very weird, but my team and I managed to make it through without much trouble.

Seeing how the directory for AddInventoryBackend works, it was easy enough to move the files from their VSCode locations to the directory paths that GitPod uses so it could access the important files. For this sprint in general, we just needed to move the linters to the correct directories and then try to run them as well as possible. For the linters that did not work, we either replaced them with other linters that were accessible through GitPod, or we removed them. Since we only needed specific linters for our project, our team was able to confirm that we had a sufficient number of linters, thanks in part to consulting our product owner. Also, since we used GitLab, creating issues and labeling them was not too difficult either, as we were familiar with managing our issue boards, especially since we learned about workflows in a previous class on software process management.

What did not work well was that I had a very hard time trying to use GitLab and GitPod, because I had never directly worked on issues before, making it more difficult for me to fully understand my environment until near the end of the sprint. I made myself a note for the next sprint to remember what I did this sprint and what else I have to do next sprint, because I am very mistake-prone when working in a new IDE. GitPod is new and more convenient, but as someone who has used other IDEs such as Visual Studio Code and Eclipse, I found it a completely unfamiliar environment. While I did make a few notable mistakes, like not understanding how to create merge requests or which tags to use, my team guided me so that I learned how to make those changes by myself after lots of practice.

As a team, we prioritized meeting up as often as necessary. We considered using Discord for virtual calls, since that is where we were going to communicate and do our stand-ups anyway. However, we found that meeting in person was much better for us, as working on a sprint by call did not suit us; joining a Discord call is too inconvenient and takes up too much time. Like I said before, managing GitLab was not too difficult since we all have experience with Scrum from our previous class. I think the best part about our team is that we are very open to helping each other whenever we are stuck on any issues relating to tasks, like with the linters.

For my individual work in the sprint, I completed a couple of issues to start out. I moved the shell script commands from the /Commands bin to /Bin, since that is how we were going to organize our shell scripts like our lint.sh script (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/addinventoryfrontend/-/issues/32). Another task I did was very similar to the first one, except I instead moved the Docker files to specific directories in GitPod (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/addinventoryfrontend/-/issues/33). The linter task I did was to add AlexJS to GitPod so we could use a new linter to help check the code for our project (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/gitlab-profile/-/issues/83). I did all of those tasks before verifying that our entire repository was in the correct state (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem/addinventoryfrontend/-/issues/35). Overall, I think that I am doing well so far individually with the sprints. The one thing I need to work on as a team member is speaking up whenever I need help, or when I need to find something in particular to work on in the sprint, since it is not just the rest of my team who has to contribute to our work. I am hoping that in this next sprint, I can get a specific issue I can work on to contribute the skills I have learned from my previous classes.

From the blog CS@Worcester – Elias' Blog by Elias Boone and used with permission of the author. All other rights reserved by the author.

sweep the floor

When you first get placed on a team, it’s sometimes hard to get your bearings if you aren’t given an explicit task to work on to begin with. We sort of experienced this at the start of the semester, where we weren’t all too familiar with the Thea’s Pantry project even after working with forks of it for assignments in prior courses. Part of this is just the nature of working with something new, and having the expectation of providing value to it in a practical sense.

In this way, it makes sense how we approached the issues that we took for the first sprint. For the most part, these issues were fairly simple and mostly plumbing, which can also be classified as sweeping the floor. The idea behind sweeping the floor is that you take on simple tasks that, while necessary, aren’t all that interesting, in order to build your confidence with the project and build rapport with the team.

The authors make a good point that this kind of work might not feel great for someone with a Computer Science degree they worked hard for, but the reality is that the degree is really just a way to get your foot in the door. The same goes for any other way you gained the qualifications to get accepted for a job or project you’re working on. The real work is what you do when you’re placed into a project, where you really get to apply the basics that you learned in college while also learning more practical skills.

I really think this is a good approach to take when you feel out of your element in a new environment. Maybe you just got hired for your first internship or even your first full-time job, and while you are excited, you don’t really know how to provide value to the project, because you haven’t necessarily had that sort of experience before. This approach seems really helpful for getting your bearings in a new project. The authors do mention a couple of drawbacks, the most notable being the feeling of being stuck doing small tasks without branching out due to anxiety, but there are ways out of that mindset too.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.