Category Archives: Week 6

sustainable motivations

I hear this a lot: people getting into tech, and specifically software engineering, because of the money. It certainly seems that many people since maybe the 2010s have pursued Computer Science degrees solely for the income and comfort the field promises, rather than out of any actual enjoyment of it.

I’ve always been good with technology. When I took my first Computer Science course in high school, I would finish assignments in 10-15 minutes when the allotted time was around two class periods. This is what I’m good at, as far as I can tell, but that’s not necessarily enough to build a career on. If I don’t care about it, then I’m stagnating. If my motivation is simply that I’m good at it, that doesn’t necessarily inspire growth.

These are two examples of unsustainable motivations. The idea is that we can get trapped in the motivations we set up for what we do, and they lock us into a (most often negative) mindset. Instead of enjoying my day-to-day work, I’ll see it as just work and want the day to be over as soon as it starts, without actually growing as a person or in my skill level.

While the solution given tries to offer a practical approach (writing down your motivations and how much they factor into your decision to stay a software developer), I don’t think it is a particularly strong one. I’m inclined to say that you have to investigate more inquisitively why you care about this career and what is really motivating you. Ultimately, you yourself create this motivation; it’s not out there waiting to be found. You can either construct for yourself an enjoyment of the journey, or care only about the outcomes.

From this, I would say solely caring about outcomes is unhealthy, and that mindset lends itself to motivations like reputation and money. As such, you have to ultimately figure out how to care about the journey if you actually want to have some level of enjoyment in your career. Otherwise every day genuinely will be a soulless repetition of the last.

There’s a reason why Camus wrote that one must imagine Sisyphus happy. If Sisyphus were to redefine the pushing of that boulder as a journey, and the rolling down of that boulder as a satisfying conclusion and new beginning, then this punishment is not as severe. Of course, this requires a lot of mental effort, but I think it’s necessary to live a life that you can actually be happy with.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

Exposing Your Ignorance

This week the pattern I decided to write about is from chapter 2. The pattern, titled “Expose Your Ignorance,” discusses something I’ve had to experience recently: letting those you’re working with fill the gaps in your knowledge. The section opens with a quote from Jake Scruggs’s “My Apprenticeship at Object Mentor”: “Tomorrow I need to look stupider and feel better about it. This staying quiet and trying to guess what’s going on isn’t working so well.” Opening the topic with that quote is impactful because I think a lot of people feel shame and call themselves stupid over simple gaps in their knowledge when others have high expectations of them. It’s important to acknowledge that it’s okay not to know everything, and to be transparent about it instead of struggling alone while deadlines approach.

When I was working on a website with a group of other developers, we were all transparent with each other about our knowledge gaps in the tech stack we had to work with. This allowed us to play off of each other’s strengths and weaknesses. Some of us were more knowledgeable about the front end while others knew more about what the backend required. I was less familiar with the front end, so I worked on the backend team. When we started the project, I was transparent about my lack of knowledge of JavaScript, routes, controllers, and HTTP requests. My team leader spent some time going over the material with me and provided some resources to research on my own, and I then looked for more resources to learn from. We also made an effort for everyone to learn a bit more about both the front end and the back end.

In the text, it says, “Conceding to unspoken pressures and telling people what they want to hear is not a good way to build strong relationships.” I agree with this: our transparency made sure we were able to help each other grow, and in turn it strengthened our relationships with each other. Your reputation will be built on your willingness to learn. There wasn’t any part I could disagree with. When you’re honest about your ignorance, you end up picking up knowledge about a variety of technologies, which makes it easier for you to adapt down the road.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

CS448 Software Development Capstone – Apprenticeship Patterns: “Concrete Skills”

I would like to continue my reflection on my fundamental software development skills and how to reinforce them in this week’s blog post. The “Concrete Skills” pattern in the “Emptying The Cup” chapter of “Apprenticeship Patterns” by David H. Hoover and Adewale Oshineye describes the practice of building and maintaining your concrete skill set to make yourself a better choice for a professional role. Knowledge of how to write build files, familiarity with the standard libraries for your chosen programming language, basic web design, and JavaScript are some examples of concrete skills given by the authors. Possession of these concrete skills is what allows you to stand out as a candidate for a developer position. The authors recommend constructing “toy implementations” to demonstrate your understanding of these concrete skills in an interview.

I wanted to read and reflect on this pattern because I was recently challenged with a programming problem where I needed to iterate through the nodes of a linked list. I remember learning about the linked list as a data structure in a previous course, and now in the elective course that I’m taking, we’ve been tasked with implementing a linked list as well as performing operations on it. The implementation of each linked list node as an object with a ‘head’ containing data and a ‘tail’ that functions as a pointer to the next node in the list was familiar to me. Despite my previous experience learning about the linked list, I still spent a long time implementing a function that would operate on each element while iterating through the list. I knew I had to repeatedly advance from each node to its tail within a while loop until the ‘head’ of the current node was empty. It was only with the help of my classmates that I rediscovered how to express that implementation in code, properly operate on each element in the linked list, and exit the while loop once the ‘head’ member variable of the node was empty. I realized that while I believed I understood basic data structures like linked lists, trees, and queues conceptually, I struggled to implement those concepts when put to the test.
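Looking back, the loop itself is short once you see it. Below is a minimal sketch of the kind of traversal I mean, assuming a hypothetical Node class that uses the ‘head’/‘tail’ naming from the assignment (the class we actually used may have differed):

public class Node {
    Integer head; // the data stored in this node
    Node tail;    // the rest of the list; null at the end

    public Node(Integer head, Node tail) {
        this.head = head;
        this.tail = tail;
    }

    // Walk the list, operating on each element until no nodes remain.
    public static int sum(Node list) {
        int total = 0;
        Node current = list;
        while (current != null && current.head != null) {
            total += current.head;  // operate on the current element
            current = current.tail; // advance to the next node
        }
        return total;
    }
}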

The pattern’s recommendation to demonstrate your concrete skills through small-scale projects has me reflecting on my goals for a project I started a few weeks ago and have only spent a few collective hours on since. I’ve been wanting to practice my design patterns, so I started a Java project called “MonsterFactory” as an exercise. I have a collection of Java classes like Zombie, Werewolf, or Vampire that all inherit from a Monster superclass and share some member variables and methods. I want to implement a MonsterFactory class that follows the Factory software design pattern and can instantiate whichever Monster subclass is needed in a given situation. This project could serve as a working example of my understanding of software design patterns, and I could also expand it to take advantage of the libraries offered in Java. I’m primarily working with strings and integers as data types in this project, but learning how to work with images as a data type to accompany each Monster could be an entirely achievable goal to add to this project.
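To make that goal concrete, here is a rough sketch of the shape I have in mind for the MonsterFactory; the member variables and values are placeholders rather than my actual code:

abstract class Monster {
    protected final String name;
    protected final int health;

    protected Monster(String name, int health) {
        this.name = name;
        this.health = health;
    }

    public String describe() {
        return name + " (" + health + " HP)";
    }
}

class Zombie extends Monster { Zombie() { super("Zombie", 50); } }
class Werewolf extends Monster { Werewolf() { super("Werewolf", 80); } }
class Vampire extends Monster { Vampire() { super("Vampire", 100); } }

class MonsterFactory {
    // The Factory pattern: callers ask for a monster by name and never
    // need to know which concrete subclass gets instantiated.
    public static Monster create(String type) {
        switch (type.toLowerCase()) {
            case "zombie": return new Zombie();
            case "werewolf": return new Werewolf();
            case "vampire": return new Vampire();
            default: throw new IllegalArgumentException("Unknown monster: " + type);
        }
    }
}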

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

Craft Over Art: A Journey of Mastery

Summary of Craft Over Art:

The “Craft over Art” pattern delves into the distinction between craftsmanship and artistic expression within the realm of software development. It emphasizes the importance of prioritizing craftsmanship, which entails mastering the fundamental skills and techniques required to create high-quality software, over the pursuit of artistic flair.

My reaction:

Upon encountering the “Craft over Art” pattern, I found myself reflecting deeply on the essence of software development as a craft. What struck me most was the notion that craftsmanship transcends mere creativity or innovation—it embodies a dedication to continuous learning, refinement, and adherence to best practices. This perspective has profoundly influenced my perception of my intended profession as a software developer.

Initially, I was drawn to software development by the allure of innovation and the opportunity to unleash my creativity through code. However, this pattern prompted me to reconsider the significance of honing my technical skills and adopting disciplined practices. It made me realize that while creativity has its place in software development, it is craftsmanship that truly underpins the creation of reliable, maintainable, and scalable software solutions.

Moreover, the pattern’s emphasis on mastery resonated with me on a personal level. It sparked a realization that becoming a proficient software developer requires more than just technical prowess—it demands a commitment to continuous improvement and a willingness to embrace challenges as opportunities for growth.

While I wholeheartedly agree with the premise of prioritizing craftsmanship over artistry in software development, I acknowledge that striking a balance between the two is essential. Creativity and innovation undoubtedly drive progress in our field, but without a solid foundation of craftsmanship, they risk being mere flashes in the pan. Therefore, I believe that the key lies in integrating artistic expression with the principles of craftsmanship, leveraging creativity to enhance the quality and elegance of our code.

The “Craft over Art” pattern has been instrumental in shaping my understanding of software development as a craft. It has inspired me to prioritize mastery, discipline, and continuous learning in my journey as a software developer, ultimately guiding me toward the path of excellence in my profession.

From the blog CS@Worcester – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

The White Belt

In this week’s blog post, I will be discussing the “The White Belt” pattern discussed in chapter 2 of “Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman” by Dave Hoover and Adewale Oshineye. This week, I chose this topic for my blog post because I think being able to set aside what you have learned in order to learn more is a difficult but important skill.

A quote early in this section that I can personally relate to discusses how your confidence in what you have already learned can make it harder to learn more: “You are struggling to learn new things, and it seems somehow harder than it was before to acquire new skills. The pace of your self-education seems to be slowing down despite your best efforts. You fear that your personal development may have stalled.” I have been in this position myself more than once. One approach the chapter mentions is called the not-knowing stance.

The not-knowing stance, as described in the chapter, is an approach built on accepting that you do not, and cannot yet, understand the entirety of what you are trying to accomplish. “Part of the approach Dave took as a family therapist included maintaining a not-knowing stance. Families in difficult circumstances were experiencing a unique reality that, despite his training, Dave knew he could not fully appreciate. While he acknowledged his skills at facilitating constructive questions and conversations, Dave was taught to refrain from believing that he had any expert knowledge into the realities that these families experienced. While this may seem counterintuitive, in reality, it fosters an attitude of respect and curiosity that opens up unforeseen possibilities and solutions. Rather than pushing solutions down on the family, Dave’s not knowing stance helped him to collaborate with the family to find solutions as a team.” As this quote shows, embracing the not-knowing stance can give you a more open mind when trying to solve problems or learn new things in different ways. It is not only incredibly helpful for learning new things, but can also help interpersonally.

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

Among various testing techniques, equivalence class testing stands out as an efficient method for cutting down the number of test cases required while maintaining thorough test coverage. 

Equivalence class testing is based on the principle that inputs can be grouped into equivalence classes that exhibit similar behavior. By selecting representative test cases from these classes, testers can efficiently cover various scenarios without testing every possible input value individually. This technique is the best of both worlds, optimizing test case selection all while maintaining thorough test coverage; as those from ProfessionalQA.com put it, both the quality of test cases as well as testing as a whole is enhanced “by removing the vast amount of redundancy and gaps that appear in the boundary value testing.”

Equivalence class testing has four variations, each with its own benefits, downsides, and uses. They are determined by combining two factors: whether single or multiple combinations of classes are tested (weak versus strong), and whether only valid values or both valid and invalid values are tested (normal versus robust). Thus, in terms of equivalence classes, we have weak-normal, strong-normal, weak-robust, and strong-robust. Weak-normal uses a few effective tests that cover each valid equivalence class at least once; strong-normal covers every combination of valid equivalence classes; weak-robust is like weak-normal but adds tests for the invalid equivalence class(es); and strong-robust covers every combination of valid and invalid equivalence classes. One thing to note about strong-robust equivalence class testing is that there is some redundancy when it comes to testing the invalid equivalence classes.
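To make the variations a little more concrete, here is a minimal JUnit sketch, assuming a made-up eligibility rule with one valid equivalence class (ages 18 through 65) and two invalid classes on either side of it:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class EligibilityTest {

    // Stand-in for the system under test: accepts ages 18 through 65.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    void weakNormal_oneRepresentativeOfTheValidClass() {
        assertTrue(isEligible(40)); // any value from the valid class [18, 65]
    }

    @Test
    void weakRobust_addsRepresentativesOfTheInvalidClasses() {
        assertFalse(isEligible(10)); // invalid class: below 18
        assertFalse(isEligible(70)); // invalid class: above 65
    }
}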

Equivalence class testing was a bit hard to pick up initially, but it really clicked thanks to some visual aid: the graphs of the variations of equivalence class testing. With those visuals, I was able to understand how effective equivalence class testing is and why someone would want to use it. It allows testers to “focus on smaller data sets, which increases the probability to uncovering more defects in the software product” and may reduce the possibility of error on the tester’s part. Compared with other testing techniques that become difficult or time-consuming on larger data sets, equivalence class testing is a great alternative.

https://www.professionalqa.com/equivalence-class-testing

From the blog CS@Worcester – Kyler's Blog by kylerlai and used with permission of the author. All other rights reserved by the author.

Interesting Features of JUnit 5

Since beginning to work with code in CS443 – Software Quality Assurance and Testing, we’ve used the JUnit framework for designing and running our test cases. So I decided to search for a blog post discussing some interesting features that I may not have come across yet but could be useful, and landed upon “Exploring the Exciting New Features of JUnit 5.” The post is from December 2023, so it should be relatively up to date, and I recall a conversation with Dr. Wurst at one point where he briefly mentioned considering switching to a newer version of JUnit for some attractive features – hopefully we can delve into some of these.

Several feature additions come with JUnit 5, and specifically version 5.4. One that immediately stood out to me was support for more/new annotations and assertions like @Nested. We’ve looked at some basic annotations like @BeforeEach and @AfterAll in class, but the idea of nesting tests is newer – and it makes perfect sense from a practical perspective. Nesting lets testers group related tests into inner classes so that each group can share its own setup and context. Proper annotation helps the test runner recognize the nested structure of the tests and manage potentially complex groups of related tests efficiently.
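To give a sense of what this looks like, here is a minimal sketch of @Nested in action, using a plain ArrayList as the thing under test so the example stands alone:

import static org.junit.jupiter.api.Assertions.*;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

class ListTest {

    List<String> list;

    @BeforeEach
    void newList() {
        list = new ArrayList<>(); // outer setup runs for nested tests too
    }

    @Test
    void startsEmpty() {
        assertTrue(list.isEmpty());
    }

    @Nested
    class AfterAddingAnElement {

        @BeforeEach
        void addOne() {
            list.add("first"); // extra setup that only this group needs
        }

        @Test
        void isNoLongerEmpty() {
            assertFalse(list.isEmpty());
        }

        @Test
        void reportsSizeOne() {
            assertEquals(1, list.size());
        }
    }
}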

There are also improvements to the assertEquals() functions and overall flexibility through enhancements to the API and assertion features for handling lambda functions. This goes hand in hand with a new feature of JUnit 5 – the ability for tests to be dynamically generated at runtime and implemented (if needed) using a factory method. Last semester in Software Construction, Design and Architecture, we learned about the Factory pattern and methodology, so it was cool to see it applied to enhance features in professional software.
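As a rough sketch of the dynamic test idea: a @TestFactory method returns a stream of DynamicTest objects built at runtime, so the test cases below are generated rather than written out by hand (the square function is just a stand-in for real code under test):

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.stream.Stream;
import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class DynamicSquareTest {

    static int square(int n) {
        return n * n; // trivial stand-in for the code under test
    }

    @TestFactory
    Stream<DynamicTest> squaresOfSmallIntegers() {
        // One generated test per input value.
        return Stream.of(1, 2, 3, 4)
            .map(n -> DynamicTest.dynamicTest(
                "square of " + n,
                () -> assertEquals(n * n, square(n))));
    }
}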

Another cool feature of JUnit 5, which represents a considerable change from JUnit 4, is the transition to a modular structure, meaning there is a separate test runner and classes that operate independently from the main program. I could imagine that this separation isolates any issues that may arise during testing and protects the main program, while also preventing unintended interactions between the main program and properly designed tests.

JUnit 5 offers some major features and enhancements over previous versions, with the ability to tag and implement nested tests, improved lambda function support, and factory methods for dynamic test creation and implementation. Considering these, I can see how JUnit can be effective for designing automated test runs. I’m looking forward to implementing more of these features in our class and homework activities for CS443, and to trying some of the extra tests and methods that I read about in this post.

Source: 

https://blog.machinet.net/post/exploring-the-exciting-new-features-of-j-unit-5

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Pairwise and Combinatorial Testing

The article “Combinatorial Testing” explores the evolution of combinatorial testing, discussing advancements in algorithm performance and constraint representation, and emphasizes the importance of detecting interaction failures within software systems. It also demonstrates the effectiveness of t-way combinations for fault detection across various domains. The article “Pairwise Testing” presents pairwise testing as a permutation-and-combination technique aimed at testing each pair of input parameters to ensure that the system functions properly across all possible combinations. It addresses the many benefits of pairwise testing and its role in reducing test execution time and cost while maintaining test coverage, and it also covers the challenges associated with pairwise testing, including its limitations in detecting interactions beyond pairwise combinations.

Pairwise Testing

Pairwise testing is a software testing method that aims to comprehensively validate the behavior of a system by testing all possible pairs of input parameter values. It is used mainly because many of the defects in software systems are triggered by interactions between pairs of input parameters, rather than by individual parameters in isolation.
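To make the pair idea concrete, here is a small JUnit sketch with three made-up configuration parameters, each having two values. Exhaustive testing would need 8 rows; the 4 rows below still cover every possible pair of parameter values:

import static org.junit.jupiter.api.Assertions.assertNotNull;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class PairwiseConfigTest {

    // Stub standing in for the real system under test.
    static String render(String os, String browser, String locale) {
        return os + "/" + browser + "/" + locale;
    }

    @ParameterizedTest
    @CsvSource({
        "Windows, Chrome,  en",
        "Windows, Firefox, fr",
        "macOS,   Chrome,  fr",
        "macOS,   Firefox, en"
    })
    void rendersUnderEveryPairOfSettings(String os, String browser, String locale) {
        assertNotNull(render(os, browser, locale));
    }
}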

Benefits & Challenges

One benefit pairwise testing offers is efficiency: by testing combinations of two input parameters at a time, it reduces the number of test cases required compared to exhaustive testing. It also offers effective defect detection: by systematically exploring pairs of parameters, it finds defects that are triggered by interactions between pairs of input values and helps identify scenarios that would otherwise be missed. One challenge pairwise testing faces is parameter selection. Selecting the right parameters is crucial and requires deep knowledge of the software and its potential interaction scenarios; choosing the wrong parameters can lead to incomplete test coverage and missed defects.

Combinatorial Testing

Combinatorial testing is a software testing technique that focuses on efficiently testing the interactions between different input parameters of a system. It involves generating a set of test cases that cover various combinations of specific parameter values.

Benefits & Challenges

One benefit of combinatorial testing is improved software quality: it identifies and addresses interaction failures early in the development process by testing various combinations of input parameters, which can help find defects that would impact the system’s performance. A challenge combinatorial testing may face is scalability. It is effective for small to medium-sized systems, but scaling it to large, complex systems with a high number of input parameters and values can run into problems.

Why did I pick this Article?

I picked these two articles about pairwise and combinatorial testing because both of these methods stand at the forefront of software testing. The articles go into detail about how both methods offer an efficient way to ensure comprehensive test coverage while minimizing redundancy, and both have taught me a lot about pairwise and combinatorial testing.

Reflection

After reading both of these articles, I have gained a greater understanding of both of these testing methods. With this newfound knowledge, I aspire to apply pairwise and combinatorial testing techniques in my future projects. Both methods offer practical solutions to common testing challenges, and by incorporating them into my future endeavors I aim to contribute to the development of reliable software systems.

Article link is here: https://www.sciencedirect.com/science/article/abs/pii/S0065245815000352

https://testsigma.com/blog/pairwise-testing/

From the blog CS@Worcester – In's and Out's of Software Testing by Jaylon Brodie and used with permission of the author. All other rights reserved by the author.

Decision Table-Based Testing, a Game Changer for Software Bugs.

Today, the next meal on my menu of headaches is Decision Table-Based Testing, which as the name suggests is a table of tests to ensure that your software is working as intended and not printing “Hello World!” when you try to generate your salary. I may be downplaying it somewhat but the truth is that it might be one of the best weapons against bugs in software development.

This approach is all about making sure your app or software doesn’t throw a tantrum under different situations by planning out every possible scenario in a neat, organized table. It’s a bit like planning a massive party and making sure you’ve thought of everything, so nothing goes wrong (well, almost nothing).

Imagine you’ve got a bunch of switches and dials that can be turned on, off, or dialed up to eleven. Decision tables help you figure out what happens to your software when you mess with those controls in every possible way. It’s a clear, visual way to lay out the “if this, then that” of your app’s behavior. This is very handy because it turns the headache of thinking through a million combinations of inputs and outcomes into something manageable.

What’s awesome about this is how it simplifies the chaos. You get this big-picture view of how different inputs play together and affect your software, making it easier to spot where things might go wrong. It’s like having a map when you’re in a maze, showing you all the paths you can take.

Starting to use Decision Table-Based Testing is pretty straightforward. You write down all the things that could change or affect your software (conditions) and what should happen in response (actions). Then, you mix and match these conditions to cover all your bases. This method is a fantastic way to find those sneaky bugs that only show up under specific conditions and to make sure your software is rock solid.
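Here’s a tiny taste of what that looks like, with made-up shipping rules; each JUnit test below checks one column (rule) of the table:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ShippingFeeTest {

    // Decision table (made-up rules):
    //   Rule          R1   R2   R3   R4
    //   isMember      T    T    F    F
    //   orderOver50   T    F    T    F
    //   fee           0    0    0    5
    static int fee(boolean isMember, boolean orderOver50) {
        return (isMember || orderOver50) ? 0 : 5;
    }

    @Test void memberLargeOrder() { assertEquals(0, fee(true, true)); }   // R1
    @Test void memberSmallOrder() { assertEquals(0, fee(true, false)); }  // R2
    @Test void guestLargeOrder()  { assertEquals(0, fee(false, true)); }  // R3
    @Test void guestSmallOrder()  { assertEquals(5, fee(false, false)); } // R4
}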

“But Ano, what if you update the app and add new stuff?”. As your app grows and gets more features, you can just update your decision table to keep up. It’s a flexible, scalable way to keep your testing game strong, no matter how advanced or complex your software gets.

Sure, it might sound a bit daunting, especially with super complicated apps. But, with the right tools and a bit of practice, it becomes a lot less scary. It’s about making the effort now to save a ton of headaches later when you’re not chasing down weird bugs half an hour before a project is due.

In the end, Decision Table-Based Testing is all about making your life easier and your software better. It’s a way to tackle the complexity head-on, with a clear plan and a cool head. And who doesn’t want that? So, if you’re in the business of making software, give it a whirl. It might just be the thing you need to keep those bug boogeymen at bay.

Till next time,

Ano out.

References:

https://testsigma.com/blog/decision-table-testing

https://www.guru99.com/decision-table-testing.html

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

spec based testing

As we move onto more code-based testing in class, I wanted to review some of the black box testing techniques we’ve gone over in class, especially since the most recent homework was somewhat confusing for me.

I’ll start off with boundary value testing. According to a blog post on SDET Unicorns, boundary value testing tests valid inputs at the edges of the domain (the minimum and maximum, plus the values just inside each), invalid inputs just outside the domain, and any special inputs, such as empty strings or null pointers. This technique’s main use is testing boundaries; that is, it’s mostly concerned with whether invalid inputs are properly rejected and valid inputs are processed as valid. The drawback, as we discussed in class, is that it doesn’t really describe the different cases of valid inputs if there is branching taking place.
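As a quick refresher for myself, here’s what those picks look like in JUnit for a made-up rule that accepts quantities 1 through 100:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class QuantityBoundaryTest {

    // Stand-in function: quantities 1 through 100 are valid.
    static boolean validQuantity(int qty) {
        return qty >= 1 && qty <= 100;
    }

    @Test
    void valuesAtAndAroundTheBoundaries() {
        assertFalse(validQuantity(0));   // just below the minimum
        assertTrue(validQuantity(1));    // the minimum
        assertTrue(validQuantity(2));    // just above the minimum
        assertTrue(validQuantity(99));   // just below the maximum
        assertTrue(validQuantity(100));  // the maximum
        assertFalse(validQuantity(101)); // just above the maximum
    }
}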

Equivalence class testing addresses this issue. From the same post above, equivalence class tests (or partitions, in the author’s words) divide inputs into, well, equivalence classes, or groups of input where behavior is expected to be the same. This also means that there are multiple groups of valid inputs, meaning this approach can effectively test different cases of valid inputs based on the specifications, rather than just testing if valid and invalid inputs behave as expected.

The reason why I wanted to look at these two specifically is because they are vital to understanding the decision table-based approach. I’m fairly confident in this approach because I found it fun to work with in class. It’s essentially a visualization and simplification of both boundary value and equivalence class testing, mostly equivalence class testing though, at least in my interpretation. The reason why I find it easier to work with decision tables is because they are much more efficient with regards to the space you use, even if the amount of mental work you have to do is larger.

It’s interesting because I found, in the homework at least, that writing out test cases for the non-table-based approaches was somewhat frustrating, because you have to consider and write out each case even when several of them do the same thing. With decision tables, you can optimize values into ‘don’t cares’: if the output depends solely on one of multiple inputs in a specific equivalence class, you don’t have to care which class the other values fall into. I really enjoy how this cleans up the entire process of black box testing. That being said, I understand that decision tables can become very difficult to build as the complexity of a project increases.
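A quick made-up example of what a don’t care buys you: if a username doesn’t exist, the password can’t matter, so two columns of the table collapse into one rule.

class LoginRules {
    // Decision table:
    //   Rule           R1              R2                R3
    //   userExists     F               T                 T
    //   passwordOK     -  (don't care) F                 T
    //   action         "no such user"  "wrong password"  "welcome"
    static String login(boolean userExists, boolean passwordOK) {
        if (!userExists) return "no such user"; // R1 covers both password values
        return passwordOK ? "welcome" : "wrong password"; // R3 and R2
    }
}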

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.