Category Archives: Week 4

Embracing “Breakable Toys”: A Pathway to Mastery

Summary of the Pattern: The “Breakable Toys” pattern addresses the challenge of growing in a professional environment where failure can have significant consequences. This pattern encourages individuals to create a safe space for learning by developing projects or systems on their own time, which they can afford to break or mess up without risking their professional standing. The essence of this approach is to allow for experimentation, learning from mistakes, and thereby gaining confidence and skill in a low-stakes setting.

My Reaction: Initially, the concept of “Breakable Toys” struck me as a simple, yet profoundly effective way to cultivate new skills and deepen existing knowledge. It’s the permission to fail – but on my own terms – that I found incredibly liberating. This pattern resonates with me deeply, as it underscores the importance of self-directed learning and the intrinsic value of failure as a stepping stone to mastery.

Insights and Changes in Perspective: The pattern illuminated the stark contrast between the safety nets often present in educational settings and the high-pressure environments of the professional world. It made me reflect on my approach to learning and professional growth, emphasizing that the path to expertise is paved with mistakes and learning opportunities. Adopting this pattern has encouraged me to shift my mindset from fear of failure to embracing it as a crucial component of my development process. It’s a paradigm shift that I believe will not only make me a more resilient professional but also a more inventive and adaptive problem-solver.

Disagreements and Critiques: While I wholeheartedly embrace the concept of “Breakable Toys,” I recognize that not everyone may have the luxury of time or resources to invest in side projects. For some, the pressures of life or work may make it challenging to find space for such endeavors. However, I believe the underlying principle of seeking out low-risk opportunities to learn and grow can be adapted in various ways, such as through virtual labs, open-source contributions, or even thought experiments.

Conclusion: The “Breakable Toys” pattern has not only broadened my understanding of learning in a professional context but also invigorated my approach to personal and professional development. It serves as a reminder that the journey towards mastery is personal, non-linear, and fraught with setbacks, all of which are invaluable learning opportunities. As I move forward in my career, I intend to keep this pattern close to my heart, ensuring that I never lose sight of the importance of playful experimentation and the lessons learned from each “breakable toy” I encounter.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Understanding Test Driven Development in Software Engineering

Test Driven Development is a method in software development in which tests are created before the actual code. By writing tests beforehand, developers have a clear understanding of what will need to be implemented, which can help avoid unnecessary errors. This structured, systematic approach helps the software behave as expected. It doesn’t just focus on testing; it also focuses on quality and behavior. The main goal is to ensure that the code meets the specified requirements and behaves as expected. Test Driven Development helps with clarifying requirements, reducing defects, and improving code maintainability.

How does it work?

Test Driven Development operates on a cycle: first write a test, then run it and watch it fail, then change the code to make it pass, and then repeat. Developers should first write a test that describes the behavior they want to implement. After running the test and writing and rewriting the code until the test passes, developers then refactor the code to improve its design and maintainability without changing its behavior. This process makes sure that each piece of code is thoroughly tested and validated before moving on to the next.
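
To make the cycle concrete, here is a minimal sketch of one red-green-refactor pass using JUnit; the PriceCalculator class and its applyTax method are invented for illustration and are not taken from the article.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    // Red: this test is written first and fails because applyTax() does not exist yet.
    @Test
    void addsTenPercentTax() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(110.0, calc.applyTax(100.0), 0.001);
    }
}

// Green: write just enough code to make the test pass.
class PriceCalculator {
    static final double TAX_RATE = 0.10;   // Refactor: name the magic number while the test stays green.

    double applyTax(double price) {
        return price * (1 + TAX_RATE);
    }
}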

Test Driven Development vs Traditional Testing

The difference between Test Driven Development and traditional testing lies in their approach and objective. Traditional testing methods usually aim to find bugs or defects in existing code, while test driven development mainly focuses on making sure that the code meets the specified requirements. A failed test in test driven development tells the developers to write new code to fulfill a requirement, and the practice tends to lead to higher quality code with fewer defects.

There are also two levels of test driven development that focus on different aspects of software development: Acceptance TDD and Developer TDD. Acceptance TDD involves writing acceptance tests that verify the overall behavior of the system based on the user’s requirements. Developer TDD focuses on writing unit tests for individual components or modules of the system.

Why Did I pick this Article?

I chose this article because Test Driven Development is a very important concept in software engineering. This article taught me a lot about test driven development, including its numerous benefits, such as improved code quality, fewer bugs and defects, and a faster development cycle. These advantages are valuable for any software development project.

Reflection

After reading this article, I have learned a lot about Test Driven Development and its many advantages. One key takeaway for me was that in this method developers write tests before actually writing the code, which helps clarify requirements and ensures that the code is correct and meets the required specifications. I also appreciated how the article discusses the difference between test driven development and traditional testing methods. Learning about the Acceptance TDD level helped me understand how test driven development can be scaled for larger projects and integrated with acceptance testing.

Now that I have gained this newfound knowledge and insight into test driven development, I can apply it in my future software development projects. I will also be able to write better, cleaner, and more maintainable code when using this method.

Article link is Here: https://www.guru99.com/test-driven-development.html

From the blog CS@Worcester – In's and Out's of Software Testing by Jaylon Brodie and used with permission of the author. All other rights reserved by the author.

Specification Based Testing

https://www.geeksforgeeks.org/specification-based-testing

  Specification based testing is a type of software testing that directly uses a system’s specification in order to design tests. While considered a type of black box testing, specification based testing can be used on any type of system, but it works particularly well when testing web applications. When practicing specification based testing you would first look at the specification and its documentation in order to see what must be tested based on the system’s functionality and uses. The objectives of specification based testing are functional accuracy, conduct-based compliance, error detection, reliability/robustness, full coverage, and compliance with standards.

  There are seven types of specification based testing listed. “State transition” testing is used to uncover errors in a system when it switches states. “Decision table” testing can test functional and non-functional system requirements. “Equivalence partitioning” is similar to decision table testing, except it is best used when testing different inputs, while decision table testing is best used on different combinations of inputs. Then there is “boundary value analysis,” which consists of creating a test case for each boundary value; “all-pairs testing,” which creates test cases covering every possible pair of input values; “classification tree” testing, which derives test cases from a tree of input classifications; and “use case” testing, which focuses on the functionality of the entire system rather than any individual component.
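
As a concrete illustration of one of these techniques, here is a small sketch of equivalence partitioning with JUnit. The isValidAge method and the 18-to-65 range are hypothetical, used only to show the idea of one test per partition rather than one test per possible input.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class AgeValidatorTest {
    // Hypothetical rule under test: ages 18 through 65 are valid.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    // One representative value from each equivalence partition: below, inside, and above the range.
    @Test
    void rejectsAgeFromBelowRangePartition() { assertFalse(isValidAge(10)); }

    @Test
    void acceptsAgeFromValidPartition() { assertTrue(isValidAge(40)); }

    @Test
    void rejectsAgeFromAboveRangePartition() { assertFalse(isValidAge(80)); }
}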

  I selected this topic and source because we have been practicing specification based testing in class recently, and I was interested in researching more into the topic and the different types of testing that are classified as specification based. This article from “geeksforgeeks” went in depth not only into the definition of specification based testing but also into its different types and objectives.

  This article was very helpful in my opinion, as it gave me a perspective on specification based testing as a whole. After practicing some of the different types of specification based testing in class and reading this article, I have a greater understanding not only of what this type of testing is but also of how to go about testing in this fashion and writing test cases that pertain to the specifications of a system. I would consider this type of testing to be more self-explanatory than others, but being able to understand its importance and its limitations is something that will help me when testing in the future.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Black-Box, White-Box, and Grey-Box Testing

This week I decided to find a blog that discusses the differences between Black-Box, White-Box, and Grey-Box Testing because I didn’t fully grasp them during the class activity. The article “Difference Between Black-Box, White-Box, and Grey-Box Testing” by TestFort Expert explains each way of testing in depth, using bullet points to outline the techniques of each type of testing and reviewing the pros and cons of each.

The blog summarizes that the black-box testing method involves testing the software without knowledge of its internal structure and source code. You use this method to test the interface against the specifications that the client gives. The techniques involved are decision table testing, error guessing, all-pairs testing, and equivalence partitioning. The first technique exercises combinations of conditions (the if-then-else and switch-case logic) to find errors related to conditions. The second describes testing based on intuition; the third tests each pair of input parameters in combination to find bugs caused by interacting parameters; and the last involves splitting the input into partitions and testing a value from each, which reduces testing time. The method tests for functionality but can only be applied to small segments of code.
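
As a rough sketch of what decision table testing might look like in practice, the example below uses JUnit 5’s parameterized test support, assuming it is available; the login rule and class are hypothetical and not taken from the TestFort article. Each row covers one combination of conditions from the table.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LoginDecisionTableTest {
    // Hypothetical rule: access is granted only when both the username and the password are valid.
    static boolean accessGranted(boolean validUser, boolean validPassword) {
        return validUser && validPassword;
    }

    // Each row is one column of the decision table: every combination of the two conditions.
    @ParameterizedTest
    @CsvSource({
        "true, true, true",
        "true, false, false",
        "false, true, false",
        "false, false, false"
    })
    void coversEveryConditionCombination(boolean user, boolean password, boolean expected) {
        assertEquals(expected, accessGranted(user, password));
    }
}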

White-box testing tests the internal structure of software and the logic behind it. One needs full knowledge of the code and the software for this method. It uses these techniques: control flow testing, data flow testing, and branch testing. The first technique tests the logic of the code by executing input values and comparing the results against expected results. The second detects improper use of data values and data flow caused by coding errors. The last technique focuses on validating the branches in the code. The blog mentioned that technical debt is reduced by maintaining code quality, which is something I didn’t think about. Another thing I didn’t consider is that it can result in false positives because test results are strictly tied to the way the code was written.
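
To picture branch testing, here is a tiny sketch; the max method is invented for illustration. Because the code is visible (white-box), we write one test that forces each branch to execute.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class MaxBranchTest {
    // White-box view of the code under test: two branches to cover.
    static int max(int a, int b) {
        if (a >= b) {
            return a;   // true branch
        }
        return b;       // false branch
    }

    @Test
    void trueBranchWhenFirstArgumentIsLargerOrEqual() { assertEquals(7, max(7, 3)); }

    @Test
    void falseBranchWhenSecondArgumentIsLarger() { assertEquals(9, max(2, 9)); }
}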

Grey-box testing is a combination of the previous methods. It tests the interface, functionality, and internal structure. It requires some knowledge of the source code but takes a more straightforward approach. The techniques of this approach are matrix testing, regression testing, and pattern testing. The first involves tracing user requirements to identify missing functionality. The second involves testing the software after modifications, and the last involves analyzing the design and architecture of the software to find the root cause of a defect. This method isn’t suitable for algorithm testing.

I think this blog is a good resource to learn about each testing method in depth. The lists of techniques were helpful in understanding what each method involves, especially since I left class without truly understanding each method. The pros of each method were straightforward, but I didn’t think about the cons, so I found that section helpful. Now that I know the techniques of each method, I am better equipped to think about how to test software.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

‘Unleash Your Enthusiasm’ Pattern

  The ‘Unleash Your Enthusiasm’ apprenticeship pattern is one I would consider very important. The basis of this pattern is that you should not feel the need to suppress your own excitement over crafting software when you are part of a team, but you should be mindful of the team’s dynamic and attitude before blindly acting on that enthusiasm. It is completely normal for a newcomer in software development to be excited to learn or to be working within a professional team. The newcomer should keep in mind, however, that even when their enthusiasm is warranted, depending on their colleagues they may want to keep it to themselves in order to avoid making a negative impression, which matters in many fields, not only software development. Even if you judge that your enthusiasm should be kept to yourself based on a team’s or individual’s dynamic, you can still ask questions in order to better yourself.

  I found this pattern particularly useful because I, along with some fellow computer science majors I know, am sometimes hesitant to show excitement over software development, since I do not know whether it would be considered unprofessional. This pattern explained to me that while you do not want to overly express your excitement throughout the learning process, it can be helpful to question things and offer your ideas even as a beginner working in a team. The perspective you offer is fresh and can lead to helpful solutions or enhance your own learning process, and the more you know about what you are developing with your team, the more useful you will be when working together on productive and time-effective solutions. Being aware that it is, for the most part, encouraged to share your thoughts, questions, or ideas, at the very least so that you can get an explanation of why an idea would or would not work, will help me be more open to sharing my thoughts with others, whether colleagues, other students, or professors.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Boundary Software Testing

https://www.testbytes.net/blog/boundary-value-analysis

This past week we began understanding and practicing boundary value testing using simple programs and, of course, simple tests. This blog post discusses that same topic, gives multiple real-life scenarios, and dives deep into the subject. Although it is not a super difficult subject to grasp (in my opinion), I feel this is a good way to reinforce the ideas we learned in class.

This blog by Testbytes gives all the information one would need for a solid starting ground in boundary value testing. It first explains the concept of the testing model and why it is done, such as how testers hope to identify errors more efficiently. An example is given using discount percentages that apply ONLY when purchase totals reach a certain threshold. The lower boundary of the first discount is tested first, checking that a total of less than $10 receives a 0% discount. Then the boundary itself is checked, with a $10 order receiving an expected 5% discount, and so on.
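
Translating that example into test code, a boundary value test might look like the sketch below. The discountFor method is a hypothetical stand-in for the article’s discount rule (0% below $10, 5% from $10 up), and the test values sit just below, exactly on, and just above the boundary.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountBoundaryTest {
    // Hypothetical rule mirroring the article's example: 0% below $10.00, 5% at $10.00 and above.
    static int discountFor(double total) {
        return total < 10.00 ? 0 : 5;
    }

    @Test
    void justBelowBoundaryGetsNoDiscount() { assertEquals(0, discountFor(9.99)); }

    @Test
    void exactlyOnBoundaryGetsFivePercent() { assertEquals(5, discountFor(10.00)); }

    @Test
    void justAboveBoundaryGetsFivePercent() { assertEquals(5, discountFor(10.01)); }
}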

Later the blog discusses the different types of boundary value testing as well as their distinctions and the disadvantages and advantages of this type of testing. 

I selected this resource because it gives a quick definition of boundary value testing but then goes into further detail on the subject with good examples and explanations. It is a well thought out post that gave me a more concrete understanding of boundary value testing and edge cases. 

Testbytes on its own is a reliable source for this specific information since it is an entire company dedicated to software testing and quality assurance. Through checking out their other posts, I see they have a variety of testing area blogs including mobile apps, web apps, games, automation, security, and more. 

Overall, this resource is great to refer back to due to its consistent information and easy to understand descriptions. 

As previously mentioned, the content in this blog post by Testbytes is a really solid basis for a good understanding of boundary value testing. I can’t say that it is the highest quality and in-depth education on the subject one can receive, but it’s useful for someone like me who is just getting started. 

Personally, I really enjoyed the material, especially the example given at the start. Although I wasn’t struggling with what we were learning in class, there were times when I had a moment of confusion, and this quick bit of information instilled some more confidence in me about the topic. One main thing I learned through this blog is that a big disadvantage of BVA is how many test cases it may require to check all the boundaries and edges, which can mean a lot of effort and more power and time consumption. I expect to weigh such advantages and disadvantages when creating test cases for my future individual and work projects.

From the blog CS@Worcester – Josh's Coding Journey by joshuafife and used with permission of the author. All other rights reserved by the author.

looking at junit

In class, we’ve been going over unit testing using the JUnit Java testing framework. In the process, I’ve come across a couple of issues regarding classes that increment a “key” attribute, that is, a class where an attribute’s value is based on the number of previously constructed instances of that class. When testing whether the attribute is assigned correctly, the result isn’t consistent, because JUnit doesn’t guarantee that tests run in the order they are written; the number of instances of the class may differ depending on the order in which the tests run (from what I’ve heard, at least).

I wanted to research a bit more into JUnit’s capabilities to see if I could figure out a solution to this problem, although I’ve already submitted the assignment relevant to this predicament. I found this blog post / guide to JUnit from Baeldung that goes through features beyond what I’ve learned in class. Perhaps here I can get some ideas on how to address this issue.

What I first found interesting in this guide was the @DisplayName and @Disabled annotations. These are actually incredibly helpful, specifically @Disabled. In the assignment, I found that there were tests that weren’t possible because the functionality had not been added to the classes we were working with. As a way to address it, I simply did this:

@Test
void test() {
    // Functionality has not been added!
    assertTrue(true);
}

This obviously isn’t the best approach to testing. With the @Disabled annotation, you can make a much better implementation of this test.

@Disabled
@Test
void test() {
  assertTrue(method());
}

This is a much better way of testing functionality that hasn’t been added yet.

With regard to the previous problem, I had considered having a @BeforeAll method create two instances of a class so that I know exactly what the keys for both instances should be (1 and 2). The problem is that if the constructor for the class isn’t set up correctly, this could result in an error. That isn’t a big deal for a small program, but I imagine it’s not really best practice.

However, JUnit has an Assumptions feature. This means that the desired method will only continue if the assumption is true. With this, we could theoretically run an assumption in a @BeforeAll method, and if it passes the assumption, we create instances with keys that are more easily testable.

static Obj obj1, obj2;

@BeforeAll
static void setUp() {
    // JUnit 5 has no assumeNoException; abort (not fail) the class if construction throws.
    try {
        obj1 = new Obj();
        obj2 = new Obj();
    } catch (Exception e) {
        assumeTrue(false, "Obj constructor threw: " + e);
    }
}

This way, we don’t have to worry about the test class not running all the way through due to an exception, and it’s also fairly elegant. Reminds me of why I like to code.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

Video suitable for JUnit beginners

Hello everyone,

Today I want to share a video for JUnit beginners like us.

“Java Unit Testing with JUnit – Tutorial – How to Create And Use Unit Tests”

By Coding with John

Here is the link: https://www.youtube.com/watch?v=vZm0lHciFsQ

I highly recommend that everyone watch this video before doing Homework 1, because the functionality we need to implement in Homework 1 is logically very similar to what he describes. The YouTuber introduces the working principles of JUnit almost from scratch and shows step by step how to create a test method and how it works.

For example, from 3 to 10 minutes into the video, he introduces the structure of a test method, including the use of calculation methods and assertion statements.

@Test
void twoPlusTwoShouldEqualFour() {
    var calculator = new SimpleCalculator();
    assertEquals(4, calculator.add(2, 2));
}

The above code shows that if numberA plus numberB equals 4, the test passes; if it is not 4, the test fails. The video also shows the various assertion methods, such as assertEquals, assertNotEquals, assertTrue, assertFalse, assertNull, and so on. We will use all of them in different scenarios.

Next, the video talks about more complicated testing. When there are multiple possible results, we need to test every scenario that may occur. He also talks about some loopholes in testing logic. For example, around the 15-minute mark, he gives an example where the condition for returning a C was changed from 80 to 81, and the tests still passed, even though a score of 80 should return a B. This does not mean that our test scenario is bad, but we may need more test code to describe the boundary conditions.
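
One way to close that kind of loophole is to add a test that sits exactly on the boundary. The sketch below is a hypothetical reconstruction of the video’s grading example, not the video’s actual code:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class GradeBoundaryTest {
    // Hypothetical grader along the lines of the video's example.
    static String letterGrade(int score) {
        if (score >= 90) return "A";
        if (score >= 80) return "B";   // if this boundary were mistakenly changed to 81, the test below would fail
        return "C";
    }

    @Test
    void eightyIsTheLowestB() {
        assertEquals("B", letterGrade(80));   // pins the boundary down so the loophole can't slip through
    }
}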

Overall, I recommend this video because it is really suitable for beginners. When I watched it, many of my questions were answered. Although these are very basic things, everything you need to learn later is built on these foundations. I believe that learning the material taught in the video will be very beneficial to our future studies. I will follow this creator; the knowledge he shares is very easy to absorb and understand.

From the blog CS@Worcester – Ty-Blog by Tianyuan Wang and used with permission of the author. All other rights reserved by the author.

Unraveling Boundary Value Testing with Ranorex

In the realm of software development, ensuring the robustness and reliability of applications is paramount. This week, I wanted to dive deeper into boundary value testing after learning about it recently. The resource I found gave me some interesting insights into when and why you would want to use boundary value testing, specifically how it can be applied to black box testing, where you don’t have access to the source code.

Why This Resource?

I chose this particular resource due to its comprehensive yet digestible explanation of BVA. The technique’s potential to significantly reduce testing time while ensuring thorough coverage of potential edge cases intrigued me. As someone striving to refine my testing strategies, understanding the nuances of BVA seemed like a crucial step. This resource did a good job at explaining it clearly.

Insights and Reflections

The article offered a clear definition of BVA and its importance in black box testing. By focusing on the boundaries of input ranges, BVA aims to uncover errors where they’re most likely to occur. This approach not only streamlines the testing process but also enhances the software’s reliability.

One of the most striking takeaways was the differentiation between BVA and equivalence partitioning. While both are essential in a tester’s arsenal, understanding their distinct roles in identifying potential errors was enlightening.

The discussion on automating BVA, particularly through tools like Ranorex, opened my eyes to the efficiency gains possible with the right technology. This insight has prompted me to consider how automation can be integrated into my future testing endeavors, potentially transforming my approach to ensuring software quality.

Application in Future Practice

The knowledge gleaned from this article will undoubtedly influence my future work. The ability to effectively utilize BVA will allow me to conduct more efficient, targeted testing. Additionally, the exploration of automation tools like Ranorex has inspired me to further investigate how such technologies can be leveraged to enhance testing processes in my projects. This will especially help in scenarios where I can’t see the code to make more specific tests.

Conclusion

The exploration of Boundary Value Analysis through Ranorex’s article has been an enriching experience, reinforcing the importance of strategic testing techniques in software development. I look forward to applying these insights to my practice, confident in the positive impact they will have on my approach to software testing, especially when combined with other techniques.

Resource Link: https://www.ranorex.com/blog/boundary-value-analysis/

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Efficiently testing code

Having your code and programs work is one thing; ensuring the code works effectively and efficiently is another. You can write good code, but it’s worth little if it does not work properly. When I tested my code, I would just run the whole thing to see if it worked, then try different inputs. When it didn’t work, I would strip it apart and run bits and pieces individually until I found the issue. Instead of meticulously scrolling through lines of code, you could use actual testing methods, like JUnit testing.

JUnit is a software testing framework for Java. Through the use of method calls, annotations, and assertions, you are able to create tests that call your methods and compare their output to the output you expect. That way, you can create multiple tests covering the possible outputs your methods and program can have.
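
For instance, a test might look something like this minimal sketch (the greet method here is hypothetical, invented just to show the shape of a test): the @Test annotation marks the method as a test, and the assertion compares the actual output with the expected one.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class GreeterTest {
    // Hypothetical method under test.
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    @Test
    void greetsByName() {
        // The test is reported as failing if the actual and expected outputs differ.
        assertEquals("Hello, Ada!", greet("Ada"));
    }
}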

In this blog post, Shinji Kanai writes everything they can about JUnit testing, what it is, how it works, the benefits, and how you can get started using JUnit. They say that JUnit can be used for unit testing, functional testing, and integration testing. They also say JUnit can be used for automation testing, to find errors in your code. They go on to show how to write test files, and how to write test code. They touch upon some other topics, like troubleshooting, assertions, debugging and exceptions, and other things. 

I chose this blog post as a good reference article, something to refer back to when you can’t remember what to do. At the same time, it’s also a really good beginner blog post for those who want to get into testing their code. I think Kanai did a good job encapsulating everything, not only summarizing the overall concept of JUnit testing but also going in depth about how to do things and covering more topics for those who may find them helpful. Personally, I was unaware of the automation testing side of JUnit and how it may be able to catch bugs before I even go looking for them, which surprised me because typically you only catch errors and bugs when you run the code.

Testing is one of the areas in coding where I knew I was lacking. Before this, I never knew how to test my code effectively, but now I feel more confident in my ability to write better code that will set me apart from others. As I continue along, I can picture myself as a better developer than I was when I first started university.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.