Category Archives: Week 7

Code Review

For this week’s blog post, I chose the article “Code Review Best Practices – Lessons from the Trenches” by Drazen Zaric. I chose this article because its topic fits perfectly with the code review segment in the syllabus. The article discusses why you should do code reviews, how code reviews act as quality assurance, how they function as a team improvement tool, how to prepare a pull request for review, and, of course, how to review code. In this post, I will discuss why you should do code reviews and how they work as quality assurance.

Reviewing code is one of the most essential parts of the development process. “It should be obvious that the primary purpose of code review is to assess the quality of the changes being introduced. I mean, the dictionary definition of review says precisely that ‘review (noun) – a formal assessment of something with the intention of instituting change if necessary.’ Of course, code being code, there’s a lot of things that can be checked and tested automatically, so there’s nuance to what needs to be checked in an actual code review.” As the article points out, many things can and should be tested. In practice, this means other people will need to review your code, and you will need to review theirs, to make sure the best possible software is being developed. Done well, this kind of quality assurance is a significant part of making your software the best it can be.

In this section of the blog post, I will discuss how the article presents code review as an incredibly useful quality assurance tool. “There are many ways in which code reviews help maintain the quality bar for the codebase and the product. In the end, it comes down to catching mistakes at a level that can hardly be automatically tested, such as architectural inconsistencies. Also, the code for automated tests should be reviewed, so there’s a meta-level at which reviews help with QA.” As mentioned, code review’s main boon to quality assurance is catching issues that can’t easily be found through traditional testing methods like automated testing. The article also recommends using checklists to keep track of what needs to be checked, how, and what the results of those checks should be. “You can have your own checklist or make it a shared list for the team or a project. There’s a ton of material written on the usefulness of checklists. In Getting Things Done, David Allen puts forward a simple idea – our minds are great at processing information but terrible at storing and recalling it. That’s why checklists are a great way of externally storing and breaking down a planned or repetitive task.” Having a way to keep track of what is done, what needs to be done, and what is incomplete is essential when working on any large project, let alone a software development project.

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Static Testing vs. Dynamic Testing

The blog post highlights the qualities of, as well as the differences between, two types of testing: static testing and dynamic testing. I chose this blog post because this course covers software quality assurance and testing, and because we have spent class time covering these two types of testing, so it can highlight and reinforce core concepts that will help me gain further knowledge within this class. In addition, the blog post explains static and dynamic testing in a simple, easy-to-understand format, which further supports using it as a resource to reinforce these topics.

The blog post, as previously discussed, covers static testing vs. dynamic testing. We learn that static testing involves examining the code without actually running it, while dynamic testing involves executing the code to check its outcomes under various testing circumstances. From those two descriptions we can tell that static testing relies on reviewing software documentation and the design of the code itself, whereas dynamic testing executes the program, allowing testers to see how the code behaves in an assortment of scenarios. This lets testers verify that the code will work as intended once it is released to the public. The two types of testing differ in how they achieve a successful test: static testing aims to identify problems and improve on them early in development, while dynamic testing aims to validate the performance and functionality of the code once it is in an executable state. Since they have different intents, your project’s requirements determine which kind of testing you will choose.
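
To make the distinction concrete, here is a minimal sketch of my own (the Calculator class and its JUnit 5 test below are hypothetical, not from the blog post). A static check, such as a review or a linter, can spot the missing zero check just by reading the source, while the JUnit test is dynamic testing because it actually executes the method:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Code under test (hypothetical example). A static check -- a review or a
    // linter -- can flag that divide() never guards against b == 0 without
    // ever running the method.
    class Calculator {
        int divide(int a, int b) {
            return a / b;
        }
    }

    // Dynamic testing: JUnit executes the code with sample inputs and
    // observes its runtime behavior.
    class CalculatorTest {
        @Test
        void divideReturnsQuotient() {
            assertEquals(2, new Calculator().divide(10, 5));
        }

        @Test
        void divideByZeroThrows() {
            assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
        }
    }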

From what I have learned, the blog post was very helpful and reinforced core concepts that will help me further in this class. Learning more about static and dynamic testing will help me with the work in this class and with knowing how to test in a professional setting. Knowing the core differences between the two will let me choose the type of testing that is best for certain circumstances when it comes to projects. In conclusion, this blog post was very helpful and will be utilized in the future.

https://testsigma.com/blog/static-testing-and-dynamic-testing

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

A Critical Component of Software Quality Assurance

Among software testing techniques, Equivalence Class Testing stands out as a highly efficient and systematic approach. This blog post delves into the concept of Equivalence Class Testing, its significance in software quality assurance (SQA), and how it fits into the broader context of software testing.

Understanding Equivalence Class Testing

Equivalence Class Testing is a black box testing method used to divide the input data of a software application into partitions of equivalent data from which test cases can be derived. An equivalence class represents a set of valid or invalid states for input conditions.

The main advantage of Equivalence Class Testing is its efficiency. Instead of testing every possible input individually, which can be impractical or impossible for systems with a vast range of inputs, testers can cover more ground by focusing on one representative per equivalence class.

Identifying Equivalence Classes

Equivalence classes are typically divided into two types: valid and invalid. Valid equivalence classes correspond to a set of inputs that are expected to be accepted by the software system, leading to a correct output. Invalid equivalence classes correspond to inputs the system is expected to reject or handle as errors. The process of identifying these classes involves analyzing the software specifications and requirements to understand the input data’s boundaries and constraints.

The Role of Equivalence Class Testing in SQA

Software Quality Assurance encompasses a wide array of activities designed to ensure that the developed software meets and maintains the required standards and procedures throughout its lifecycle. Equivalence Class Testing fits into the SQA framework as a key component of the testing phase, contributing to the overall goal of identifying and mitigating defects.

By integrating Equivalence Class Testing into the SQA process, organizations can achieve several objectives:

  1. Enhanced Test Coverage: Equivalence Class Testing allows teams to systematically cover a wide range of input scenarios, thereby increasing the likelihood of uncovering hidden bugs.
  2. Efficiency and Cost-Effectiveness: By reducing the number of test cases without sacrificing the breadth of input conditions tested, teams can optimize their resources and save significant time and costs.
  3. Improved Software Quality: By ensuring that different categories of input are adequately tested, teams can enhance the robustness and reliability of the software product.

Implementing Equivalence Class Testing

To effectively implement Equivalence Class Testing, teams should follow a structured approach:

  1. Review Requirements and Specifications: Begin by thoroughly analyzing the software requirements and design documents to identify all possible input conditions.
  2. Identify and Define Equivalence Classes: Classify these input conditions into valid and invalid equivalence classes.
  3. Design and Execute Test Cases: Develop test cases based on representative values from each equivalence class and execute them to verify the behavior of the application.
  4. Evaluate and Document Results: Record the outcomes of the test cases and analyze them to identify any deviations from the expected results.
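
As a small illustration of steps 2 and 3 (my own sketch, not from the referenced blog; the Registration class and the 18–65 age rule are hypothetical), one representative value is chosen from each equivalence class and turned into a JUnit test case:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical system under test: registration accepts ages 18 through 65.
    class Registration {
        boolean isEligible(int age) {
            return age >= 18 && age <= 65;
        }
    }

    // One representative value per equivalence class:
    // valid class 18..65, invalid class below 18, invalid class above 65.
    class RegistrationTest {
        @Test
        void representativeOfValidClassIsAccepted() {
            assertTrue(new Registration().isEligible(30));
        }

        @Test
        void representativeBelowRangeIsRejected() {
            assertFalse(new Registration().isEligible(10));
        }

        @Test
        void representativeAboveRangeIsRejected() {
            assertFalse(new Registration().isEligible(80));
        }
    }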


This post was based on this blog: https://www.celestialsys.com/blogs/software-testing-boundary-value-analysis-equivalence-partitioning

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

“Concrete Skills”: Building a Tangible Foundation for Growth

Summary of the Pattern: The “Concrete Skills” pattern emphasizes the importance of acquiring skills that can be directly applied in a professional setting, particularly for individuals in the early stages of their career in technology. It advocates for the development of a portfolio of practical, demonstrable skills that make one a valuable team member from day one. These skills not only facilitate smoother transitions into new roles or projects but also enhance an individual’s employability and credibility within their field.

My Reaction: The straightforwardness of the “Concrete Skills” pattern immediately resonated with me. It serves as a pragmatic reminder that, amidst the allure of high-level theories and complex conceptual frameworks, the ability to contribute tangibly and effectively to a project is invaluable. This pattern has reinforced my belief in the necessity of balancing theoretical knowledge with practical skills that can be immediately applied to solve real-world problems.

Insights and Changes in Perspective: Delving into this pattern prompted me to reassess my skill set and identify areas where my capabilities could be seen as both concrete and valuable in a team setting. It has shifted my focus towards a more balanced approach to learning, where equal weight is given to both the acquisition of theoretical knowledge and the development of practical, hands-on skills. This realization has not only influenced the way I plan my learning goals but has also altered how I present myself professionally, emphasizing skills that can have an immediate impact.

Disagreements and Critiques: One potential critique of the “Concrete Skills” pattern might be its perceived emphasis on the immediate applicability of skills, which could inadvertently sideline the long-term benefits of foundational, theoretical knowledge. However, I believe the pattern does not dismiss the value of theory but rather highlights the importance of having a well-rounded skill set. It’s about striking the right balance between being able to contribute now and laying the groundwork for future innovations.

Conclusion: The “Concrete Skills” pattern has profoundly influenced my approach to professional development, highlighting the importance of building a repertoire of skills that are as tangible as they are valuable. It serves as a guide for navigating the complex landscape of technology careers, where the ability to demonstrate concrete skills can set one apart in a competitive field. As I continue to advance in my career, I am motivated to cultivate a diverse set of skills that not only underscore my theoretical knowledge but also showcase my capacity to contribute meaningfully to any team or project.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

The Evolution of Kanban: A dive into another methodology

Having taken a look at Kanban (which was part of our homework task), I found that this powerful framework for software development teams has become a cornerstone for teams aiming to enhance efficiency and transparency in their workflow. Originating over 50 years ago in Toyota’s manufacturing processes, the methodology has seamlessly transitioned into the realm of software development. In this blog post, we’ll delve into the key principles of Kanban, its historical roots, and its application in modern agile practices.

From what information I was able to gather, Kanban is grounded in the just-in-time (JIT) manufacturing process that Toyota pioneered in the late 1940s. It draws inspiration from supermarkets stocking products based on consumer demand; Toyota aimed to align their factory inventory levels with actual material consumption in the same way. The implementation involved passing a visual signal, known as a “kanban,” between teams to communicate real-time capacity on the factory floor and with suppliers.

Fast forward to today, and agile software development teams have embraced these JIT principles to optimize work in progress (WIP) and match it with the team’s capacity. The heart of Kanban lies in visualizing work, limiting WIP, and ensuring real-time communication of capacity.

Central to the Kanban methodology is the Kanban board, a visual project management tool available in physical or digital form. This tool aids in visualizing work, limiting work-in-progress, and maximizing efficiency or flow. Whether physical or virtual, the Kanban board serves as the single source of truth for the team’s work, ensuring transparency and real-time communication of capacity.
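
To make the work-in-progress idea concrete, here is a minimal sketch of my own (not from the Atlassian article; the column names and limits are hypothetical) of a Kanban column that refuses new cards once its WIP limit is reached:

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of one Kanban board column with a work-in-progress limit.
    // A real board, physical or digital, enforces the same rule: no new card
    // enters a column that is already at capacity.
    class KanbanColumn {
        private final String name;
        private final int wipLimit;
        private final List<String> cards = new ArrayList<>();

        KanbanColumn(String name, int wipLimit) {
            this.name = name;
            this.wipLimit = wipLimit;
        }

        boolean pull(String card) {
            if (cards.size() >= wipLimit) {
                return false; // column is full; finish work before starting more
            }
            return cards.add(card);
        }

        public static void main(String[] args) {
            KanbanColumn inProgress = new KanbanColumn("In Progress", 2);
            System.out.println(inProgress.pull("Fix login bug")); // true
            System.out.println(inProgress.pull("Add search"));    // true
            System.out.println(inProgress.pull("Refactor API"));  // false, WIP limit reached
        }
    }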

Kanban offers a plethora of advantages for software developing teams, making it one of the most popular software development methodologies today. The flexibility in planning, shortened time cycles, fewer bottlenecks, and the use of visual metrics contribute to its widespread adoption. The ability to adjust priorities without disrupting the team, optimize cycle time, and limit WIP ensures that Kanban is not only effective but also adaptable to different team structures and objectives.

In terms of what stuck with me the most after reading the article, I’d say the historical connection to Toyota’s manufacturing processes highlighted the enduring nature of concepts like just-in-time manufacturing and efficient inventory management. Understanding how these principles translated into the realm of software development underscored the universal applicability of Kanban. The emphasis on visualization through Kanban boards, and the use of cards for each work item, also struck me as a simple yet powerful way to enhance collaboration and transparency within a team. The flexibility in planning, especially the ability to reprioritize work without disrupting ongoing tasks, stood out as a valuable feature. This aligns well with the dynamic nature of software development, where changes in priorities are not uncommon.

https://www.atlassian.com/agile/kanban

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

What is Refactoring?

Refactoring is the process of restructuring code without changing or adding to its functionality or external behavior. There are a lot of ways to go about refactoring, but it mostly comes down to applying standard, basic actions. These changes preserve the software’s functionality and behavior, and because each change is so small, it is less likely to cause any errors. So, what is the point of refactoring? Refactoring turns messy, confusing code into clean, understandable code. Messy code is hard to understand and maintain, and when it’s time to add a required functionality, it causes a lot of problems because the code is already confusing. With clean code, it’s easier to make changes and improve on any problems, and anybody who works with the code later can understand it and appreciate how organized it is. When messy code isn’t cleaned up, it can slow down feature development because developers have to take more time to understand and trace the code before they can make any changes themselves.
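
To illustrate, here is a small before-and-after sketch of my own (the order-pricing example is hypothetical). The behavior is identical; only the structure and names change:

    // Before: a terse method that is hard to read (hypothetical example).
    class OrderBefore {
        double calc(double p, int q) {
            return p * q + (p * q > 100 ? 0 : 10);
        }
    }

    // After: the same behavior, restructured with meaningful names and an
    // extracted helper. Nothing was added or removed functionally.
    class OrderAfter {
        private static final double FREE_SHIPPING_THRESHOLD = 100;
        private static final double SHIPPING_FEE = 10;

        double totalWithShipping(double unitPrice, int quantity) {
            double subtotal = unitPrice * quantity;
            return subtotal + shippingFor(subtotal);
        }

        private double shippingFor(double subtotal) {
            return subtotal > FREE_SHIPPING_THRESHOLD ? 0 : SHIPPING_FEE;
        }
    }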

Knowing when to refactor is important, and there are different times to do it. One is while you’re reviewing the code: reviewing code before it goes live is the best time to refactor and make any changes before pushing it through. You can also schedule certain parts of your day for refactoring instead of doing it all at once. By cleaning your code, you are able to catch bugs before they create problems. The main point of refactoring is that cleaning up dirty code reduces technical debt. Clean code is easier to read, and anybody besides the original developer who works on it can easily understand, maintain, and add features to it. The less complicated the code is, the easier the source code is to maintain, and a clean design can serve as a tool for other developers and become the basis of code elsewhere. This is why I believe refactoring is important: changing even the smallest piece of code can lead to a better, more functional approach to programming, help developers understand the code, and lead to better design decisions.

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

rest apis

REST stands for Representational State Transfer. This means that when a client requests a resource using a REST API, the server transfers back the current state of the resource in a standardized representation.
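
As a rough sketch of what that exchange looks like from the client’s side (the URL and response shape below are hypothetical, and this is just one way to make the call from Java):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal sketch of a REST call. The client asks for a resource; the
    // server returns the resource's current state as a standardized
    // representation, typically JSON.
    public class RestExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/guests/1"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // The body would hold the representation, e.g. {"id": 1, "name": "Ada"}
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }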

From the blog CS@Worcester – Andres Ovalles by ergonutt and used with permission of the author. All other rights reserved by the author.

law of demeter

During the course of this course (Software Construction, Design, and Architecture), there have been design concepts that are very easy to grasp at first glance, and those that take much more time to digest. I was surprised to see that the Law of Demeter, or Principle of Least Knowledge, is a fairly intuitive rule, but feels almost too restrictive in nature.

Essentially, the Law of Demeter is the concept that methods in an object should not communicate with any element that isn’t ‘adjacent’ to it. According to a blog post by JavaDevGuy (and thinking of a Java application to the rule), the elements that are allowed by the law are the object itself (this), objects in the argument of the method, instance variables of the object, objects created by the method, global variables, and methods that the method calls.

This is most easily explained by a negative example. For example, if a class Car has a method with a Dashboard object as an argument, it can technically call something like dashboard.getVersion(). But if a class Garage method has a Car argument, the method should not call something like car.getDashboard().getVersion(). Maybe this is a silly example, but this applies to more practical code as well.
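
Here is the same example sketched in code (my own illustration; the class and method names are just for demonstration). The chained call reaches through Car into Dashboard, while the delegating method keeps Garage talking only to its immediate collaborator:

    // Sketch of the Car/Dashboard example above.
    class Dashboard {
        String getVersion() { return "2.1"; }
    }

    class Car {
        private final Dashboard dashboard = new Dashboard();

        Dashboard getDashboard() { return dashboard; }

        // Delegation keeps callers talking only to their immediate "friend".
        String getDashboardVersion() { return dashboard.getVersion(); }
    }

    class Garage {
        // Violation: Garage reaches through Car into Dashboard.
        String versionViaChain(Car car) {
            return car.getDashboard().getVersion();
        }

        // Closer to the Law of Demeter: Garage only talks to Car.
        String versionViaDelegation(Car car) {
            return car.getDashboardVersion();
        }
    }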

JavaDevGuy goes further to say that most Getter methods violate this law. This interpretation seems restrictive, as it makes it much more difficult to just get work done (of course, I’m not the most experienced in the design aspect of software engineering so I could be wrong). It seems more practical to use the law to get rid of chaining in your code, as it causes needless complexity. Chaining methods together, regardless of how necessary it is, always ends up looking kind of ugly. I do feel like it is a necessary evil sometimes though.

As it stands, I can understand that this sort of practice can minimize the amount of complexity and reduce code repetition, but it does feel like sometimes you need to put things together in this way to get the desired output. The aforementioned blog post seems to explain when code is violating the law, but unless my eyes and brain are too tired to read properly, the author doesn’t really give any good replacement options for the code. The few alternatives given don’t seem very desirable. This is typically the problem with negative rules: they impose a restriction without a solution, so you have to scramble to figure out how to work within them.

Perhaps I’ll understand better when we cover this material in class.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

CS343 – Week 7

SOLID is an acronym that stands for five principles that help write high-quality, maintainable, and scalable code. These are the Single Responsibility Principle (SRP), Open-Closed Principle (OCP), Liskov Substitution Principle (LSP), Interface Segregation Principle (ISP), and Dependency Inversion Principle (DIP). Each provides its own benefits, and they work in tandem when designing a program.

SRP states that a class should only have one responsibility, meaning only one reason to change. This helps prevent a class from having too many responsibilities that can affect each other when one is changed. Following SRP ensures that the code will be easier to comprehend and prone to fewer errors. However, it is harder than it sounds to fulfill this principle. The quickest solution to adding a new method or functionality would be to add it to existing code, but this could lead to trouble down the road when trying to maintain the code.
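
A minimal sketch of the idea (a hypothetical example of my own): a class that both builds a report and saves it has two reasons to change, so it is split in two:

    // Before: one class with two responsibilities (content and persistence).
    class Report {
        String buildContent() { return "quarterly numbers"; }
        void saveToFile(String path) { /* file I/O mixed in with report logic */ }
    }

    // After: each class has a single reason to change.
    class ReportBuilder {
        String buildContent() { return "quarterly numbers"; }
    }

    class ReportWriter {
        void saveToFile(String content, String path) { /* only persistence changes touch this class */ }
    }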

OCP states that software classes, modules, functions, etc. should be open for extension but closed for modification. This is essential because it allows entities to be extended without modification, so developers can add new functionality without risking breaking existing code. Adding a level of abstraction with the use of interfaces helps design the program for loose coupling.
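
A small sketch of the principle (a hypothetical shapes example of my own): new behavior is added by writing a new class, not by editing the existing calculator:

    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    class AreaCalculator {
        // Closed for modification: adding a Triangle later needs no change here.
        double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area();
            return sum;
        }
    }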

LSP states that any instance of a derived class should be substitutable for an instance of its base class without affecting the program in a harmful way. The importance of this principle revolves around the ability to ensure the behavior of the program remains consistent and predictable. Unfortunately, there is no easy way to enforce this principle, so the user must add their own test cases for the objects of each subclass to ensure that the code does not significantly change the functionality.
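A classic way to see the problem, sketched here as a hypothetical example of my own (unrelated to the shapes sketch above): a Square that rewrites both sides breaks code written against Rectangle, so it is not a safe substitute:

    class Rectangle {
        protected int width, height;
        void setWidth(int w) { width = w; }
        void setHeight(int h) { height = h; }
        int area() { return width * height; }
    }

    // Overriding the setters to change both sides violates the base class's
    // contract, so this subclass is not substitutable for Rectangle.
    class Square extends Rectangle {
        @Override void setWidth(int w) { width = w; height = w; }
        @Override void setHeight(int h) { width = h; height = h; }
    }

    class LspDemo {
        public static void main(String[] args) {
            Rectangle r = new Square();
            r.setWidth(2);
            r.setHeight(5);
            // A caller expecting Rectangle behavior gets 25 instead of 10.
            System.out.println(r.area());
        }
    }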

ISP focuses on designing interfaces that are specific to a user’s needs. Instead of creating a large interface that covers all methods, it is more beneficial to split the methods across smaller, more focused interfaces that are less coupled. For example, having too many methods in an interface can cause issues in the code, so it is better to separate the methods into individual interfaces that can each be implemented only by the classes that actually need them.
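
A brief sketch (a hypothetical printer example of my own): instead of one bloated interface, clients implement only the small interfaces they actually need:

    // A bloated interface would force every implementer to provide all three.
    interface Machine {
        void print();
        void scan();
        void fax();
    }

    // Smaller, focused interfaces let each class depend only on what it uses.
    interface Printer { void print(); }
    interface Scanner { void scan(); }

    class SimplePrinter implements Printer {
        public void print() { System.out.println("printing"); }
    }

    class OfficeMachine implements Printer, Scanner {
        public void print() { System.out.println("printing"); }
        public void scan() { System.out.println("scanning"); }
    }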

DIP states that high-level modules should not depend on lower-level modules; both should depend on abstractions. This approach aims to reduce coupling between modules, increase modularity, and make the code easier to maintain, test, and extend. An important thing to note is that both high-level and low-level modules depend on abstractions. Dependency Inversion works together with the other SOLID principles, which in turn leads to more refined and maintainable code.
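
A minimal sketch of the idea (the storage example is hypothetical, my own illustration): the high-level service depends on an abstraction, and the concrete low-level module is supplied from outside:

    interface MessageStore {
        void save(String message);
    }

    class FileStore implements MessageStore {
        public void save(String message) { /* write to a file */ }
    }

    class DatabaseStore implements MessageStore {
        public void save(String message) { /* write to a database */ }
    }

    class MessageService {
        private final MessageStore store;

        // The concrete implementation is injected, so the high-level module
        // never names a low-level one.
        MessageService(MessageStore store) { this.store = store; }

        void post(String message) { store.save(message); }
    }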

From the blog CS@Worcester – Jason Lee Computer Science Blog by jlee3811 and used with permission of the author. All other rights reserved by the author.

Encapsulating What Varies

For this week’s blog post, I chose to discuss the design principle of encapsulating what varies, as described in the article “Encapsulate What Varies (EWV) – Design Principle” by Pankaj. The article discusses the rationale behind encapsulating what varies, its two aspects, its benefits, the strategies for implementing it, and gives an example of what it looks like in practice. It falls in line with the design principle section of the syllabus and its sub-section on encapsulating what varies. I also enjoyed the section discussing the benefits of the principle. In this blog post, I will review a few of the benefits of encapsulating what varies that are mentioned in the article.

The first benefit mentioned in the article that I will be discussing is flexibility. “Flexibility: By encapsulating the parts of the system that are subject to change, we make it easier to modify or replace those parts without affecting the rest of the system. This makes the system more flexible and easier to adapt to changing requirements.” As stated in the article, when you encapsulate what varies, your project becomes significantly more straightforward to maintain and update in the future. Instead of having to adjust one extensive method or class and possibly causing conflicts with other parts of the project, isolating the factors that are likely to change prevents updates from causing problems in unrelated systems.

The next benefit that I will be discussing that comes with encapsulating what varies is reusability. “Reusability: By creating abstractions that represent the parts of the system that are subject to change, we make it possible to reuse those abstractions in different contexts. This can save time and effort when developing new features or applications.” Being able to reuse aspects of your code in other areas is very useful in reducing time developing and bug testing. Instead of making more methods or classes that could conflict with preexisting ones, you are reusing methods or classes that you already know will not conflict with other areas in your project.

Finally, I will discuss one of the greatest benefits that come with encapsulating what varies: maintainability. “Maintainability: By isolating the impact of changes to a specific part of the system, we make it easier to maintain the system over time. This reduces the risk of introducing unintended side effects or breaking existing functionality.” As I have also mentioned earlier in this blog post, isolating frequently changing parts of the project makes it much easier to diagnose bugs or other issues that may come up as the project is developed or as it is updated as time goes on.
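
To tie these benefits together, here is a minimal sketch of my own (the discount example is hypothetical, not from the article): the part that keeps changing is hidden behind an interface, so new variations can be added without touching the code that uses it:

    // The varying behavior (how a discount is computed) is encapsulated
    // behind an interface.
    interface DiscountPolicy {
        double apply(double subtotal);
    }

    class NoDiscount implements DiscountPolicy {
        public double apply(double subtotal) { return subtotal; }
    }

    class HolidayDiscount implements DiscountPolicy {
        public double apply(double subtotal) { return subtotal * 0.9; }
    }

    class Checkout {
        private final DiscountPolicy policy;

        Checkout(DiscountPolicy policy) { this.policy = policy; }

        double total(double subtotal) {
            return policy.apply(subtotal); // unchanged when a new policy is added
        }
    }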

Article: https://www.neatcode.org/encapsulate-what-varies/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.