Category Archives: CS@Worcester

Test-Driven Development

Hello everyone,

This week’s blog topic is Test-Driven Development (TDD), something we recently talked about in class, and it didn’t take long for me to understand its importance. Test-Driven Development is a software development process in which you write tests for your code before you write the code itself. At first this was very confusing, but after actually trying it and working with it, I saw its purpose.

This approach has transformed how coding projects are developed by making everything revolve around testing. The more traditional approach is built around the waterfall model, a linear process where testing occurs near the end of one long timeline. TDD instead makes testing an ongoing, iterative process. It follows a simple cycle: you first write a test for the desired feature, ensure the test fails because the feature has not been written yet, and then write just enough code for the test to pass. This cycle repeats with further improvements and new features until the project is complete. Its core principle is rooted in Agile development, which emphasizes iterative development, collaboration, customer feedback, and the ability to adapt to change.

The benefits of TDD are that it enhances collaboration through a shared understanding of requirements, it is one of the best ways to detect bugs early in the development cycle, and it improves code design immensely. Because the code is driven by tests, it evolves organically, and the program ends up with a consistent style throughout without much extra effort. These core principles also lower long-term maintenance costs, which can be hugely beneficial for big projects with a long lifespan. The blog does not only explain what TDD is; the author also shares advice and best practices for developers trying to implement it in their work. You start simple by writing focused tests on the fundamental features of the project, then create more complex tests for specific situations. What I liked about this post, and the reason I chose it, is that it strikes a good balance between theoretical concepts and practical use. I appreciated how the author walked us through the concepts and then immediately followed up with simple examples. This reinforces what you are trying to learn, and through practice you test the concepts you just learned.
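
To make that cycle concrete, here is a minimal sketch of my own (not taken from the CircleCI post; the ShoppingCart class and test names are hypothetical) showing one pass in Python:

# Step 1 (red): write the test first -- it fails because ShoppingCart does not exist yet.
def test_new_cart_total_is_zero():
    cart = ShoppingCart()
    assert cart.total() == 0

# Step 2 (green): write just enough code to make the test pass.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def total(self):
        return sum(self.items)

# Step 3 (refactor): clean up the code while the test stays green,
# then repeat the cycle for the next feature (e.g., adding items).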

In conclusion, Test-Driven Development represents a new way of approaching software development, one that highlights and prioritizes testing and the quality it brings, and builds everything around it. By integrating testing into every step of development, TDD creates strong, maintainable software while supporting modern development practices. Development may feel slow at first, but the long-term benefits and the higher quality of the code make up for it.

Source: https://circleci.com/blog/test-driven-development-tdd/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

CS443: Cultivating a Culture of DevOps

A snap of the Microsoft Azure learning portal.

Today’s article, “Recommendations for fostering DevOps culture”, comes from the Azure Well-Architected Framework. You can find it here.

Following up on my previous two articles on the failure of the Spotify model and an exercise in attempting to salvage some part of said model, I decided to read an article from Microsoft on how to best implement DevOps. Their suggestion? Start with the team culture.

What is it about?

This article lays out a whole set of best practices for creating a successful, productive, and well-managed team structure, in accordance with Azure’s Well-Architected Framework.

What did I learn?

That more than anything else, culture is what separates a computer hobbyist from a professional engineer. It’s not enough to simply be individually passionate; you have to be part of a team structure that allows your passion to directly contribute to the success of the team, and for you to be similarly inspired by your own teammates’ contributions.

What was most surprising to me?

That these are “common sense” principles. I think a lot of people, especially in the corporate world, have a particularly hierarchical idea of team structure. I’m sure we’ve all had a boss who was far more interested in holding pulse checks, team meetings, and debriefs than in actually letting their employees do their work.

Even so, the fact that many of the WAF’s principles not only make sense but make good sense seems to be a credit to how well thought out they are. Rather than forcing the issue of collaboration through meetings, for example, it says to allow collaboration to occur in small spaces between workers, so that the process of ideating comes naturally at its own pace.

What are my take-aways?

  • Encourage interrelatedness as opposed to interdependence.
  • Use effective team charting software, e.g., Jira. (See last week’s article.)
  • Embrace continuous improvement, continuous integration, and continuous development wholeheartedly.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Learning the Ropes: A Student’s Dive into QA Testing Best Practices

I recently came across the article “Best Practices for Software Quality Assurance Testing” by KMS Technology. As a computer science student eager to understand the practical aspects of software development, this piece offered a clear roadmap of the QA process, breaking it down into four main stages: planning, test design, execution, and reporting.

1. Planning: The article emphasizes the importance of early-stage planning, including resource allocation, timeline estimation, and tool selection. This stage sets the foundation for the entire QA process.

2. Test Design: Here, the focus is on defining both functional and non-functional requirements. The article suggests determining which tests can be automated and which require manual intervention, ensuring comprehensive coverage of user scenarios.

3. Test Execution: This phase involves running the designed tests and evaluating any defects or issues that arise. It’s a repetitive process, ensuring that each identified problem is addressed and retested.

4. Reporting and Maintenance: The final stage is about documenting findings, analyzing results, and ensuring that the software is ready for release. Continuous feedback loops between testers and developers are crucial here.
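
To connect the test-design stage to something concrete, here is a minimal sketch of my own (not from the article; the requirement and function names are hypothetical) of how a functional requirement identified during test design might end up as an automated check in Python:

# Hypothetical requirement from the design stage:
# "A user's display name must not contain leading or trailing spaces."
def format_display_name(first, last):
    """Combine first and last name into a clean display name."""
    return f"{first.strip()} {last.strip()}".strip()

def test_display_name_is_trimmed():
    assert format_display_name("  Ada ", " Lovelace  ") == "Ada Lovelace"

def test_display_name_handles_normal_input():
    assert format_display_name("Ada", "Lovelace") == "Ada Lovelace"

Run with a test runner such as pytest, checks like these can be repeated automatically on every build, while exploratory or visual checks would remain manual.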

Reflecting on these stages, I realize how often, in academic projects, we might overlook structured testing due to time constraints or lack of emphasis. However, this article highlights that integrating testing throughout the development cycle, rather than treating it as a final step, leads to more reliable and efficient software.

One key takeaway for me is the significance of clear communication and documentation. In group projects, miscommunication can lead to redundant work or overlooked bugs. By adopting a structured QA approach, we can mitigate these issues.

In conclusion, this article has provided me with a practical framework for approaching software testing. As I continue my studies and work on more complex projects, I plan to implement these best practices to enhance the quality and reliability of my work.

For those interested in a deeper dive, here’s the full article: Best Practices for Software Quality Assurance Testing.

From the blog Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.

Comparing Enterprise Testing to Consumer Testing

Photo by SevenStorm JUHASZIMRUS on Pexels.com

Hello, Debug Ducker here. In my last post I discussed how developers prioritize quality in an enterprise product compared to a consumer product. The topic got me thinking about what else a developer needs to be concerned with when it comes to enterprise software versus consumer software, and what ideas or planning each one requires.

First, the differences are clear when it comes to developing and testing these two types of software. For consumer products you are developing for a large audience while prioritizing the experience of the individual user. Enterprise products are mostly for organizational use, such as by a company. When making either, one needs to think about what the software needs to do. Consumer products include things such as photo editors, video editors, spreadsheets, and so on. Enterprise products are specific to the organization that needs them to fill a particular purpose. That doesn’t mean that software that can help a business is always enterprise software; some software, like spreadsheet programs, is useful both for the individual and for a business.

When it comes to developing the two kinds of products, different mindsets are needed. For consumer software, you need to get into the consumer’s head and think about what they may look for in the product, often things such as price and whether it meets their needs. Enterprises are different in that they usually wouldn’t mind paying more and expect a certain degree of quality. I found that enterprise products tend to have more care put into them compared to consumer products, probably because a business wouldn’t want to upset a partner. An enterprise may have specific requirements it hopes the developers can fulfill, which is why software developers often work closely with the business. Think of that as a bonus: working with an enterprise may demand more of the developer, but if you work closely with them, you can get things done that actually meet their demands.

Testing for enterprise software is also a lot more complex. I found some details online that list the specific types of testing to look out for.

  • Functionality Testing
    • Testing to see if it meets the functionality requirements
  • Usability Testing
    • Testing for an optimal user experience
  • Security Testing
    • Testing for vulnerabilities
  • Performance Testing
    • Testing to see how well it performs
  • Integration Testing
    • Testing to see if different modules, applications, and external systems work with it
  • Compliance and Regulatory Testing
    • Testing to see if it meets legal and industry-specific requirements

These are all the types of testing to look out for when it comes to enterprise software. That’s not to say the same can’t be done for consumer products, but it is a lot more important for enterprise; a rough sketch of what one of these tests might look like follows below.
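
As an illustration of just one category from that list, here is a minimal integration-style test sketch in Python (the billing client and function names are hypothetical, not taken from any of the cited articles). It checks that the application talks to an external system the way we expect, with a mock standing in for the real service:

from unittest.mock import Mock

# Hypothetical application code: submits an invoice to an external billing system.
def submit_invoice(billing_client, invoice_id, amount):
    response = billing_client.post("/invoices", {"id": invoice_id, "amount": amount})
    return response["status"] == "accepted"

# Integration-style test: verifies our code and the external client interact correctly,
# using a Mock in place of the real billing system.
def test_submit_invoice_sends_expected_payload():
    client = Mock()
    client.post.return_value = {"status": "accepted"}

    assert submit_invoice(client, invoice_id=42, amount=99.50) is True
    client.post.assert_called_once_with("/invoices", {"id": 42, "amount": 99.50})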

Thank you for your time, have a nice day.

“The Ultimate Guide to Enterprise Software Testing.” Testlio, 3 Jan. 2025, testlio.com/blog/enterprise-software-testing/.

Nasnodkar, Sid. “Enterprise vs Consumer Product Management.” Product School, 9 Jan. 2023, productschool.com/blog/product-fundamentals/enterprise-vs-consumer-product-management.

Shields, Keith. “Enterprise Software Development vs. Regular Software Development.” Custom Software Development and Mobile App Design, Designli LLC, 1 Mar. 2025, designli.co/blog/enterprise-software-development-process.

From the blog Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Static and Dynamic Testing

Recently, I have been learning a lot about testing code, specifically methods such as static and dynamic testing. For this blog post, I wanted to research static and dynamic testing methods and their applications more deeply, especially given my limited experience with them up until now. While researching this topic, I came across a well-made podcast titled “Static Testing vs Dynamic Testing: What is the Difference?” by CTSS Academy on Spotify. I found this podcast to be a perfect choice to aid my research for the software quality assurance and testing course I am taking this semester, especially because of the speakers’ prior experience in a very relevant workplace focused on software testing, and the way they address their intended audience of prospective software testers.

STATIC TESTING: 

This involves looking at software without ever running it, much like reviewing a blueprint. Static testing has a heavy emphasis on preventing bugs from existing in the first place, much like proofreading instructions before investing time and resources into a task that cannot be undone. Initial requirement documents are involved here, as well as all other outlines for how the project will be constructed. Source code can also be included, along with the test cases and scripts needed when testing the software; really anything generated while developing the software, except running the code itself. The most basic form of static review is the “informal review,” something as simple as one coworker looking over another coworker’s code and giving feedback. A technical review is more formal and is essentially a peer review of the technical procedures or specifications. Static testing also includes “walkthroughs,” or presentations of a project to peers, covering the step-by-step process taken when creating it. Static code review, or white-box review of the code itself, could include specifically looking at the code written and making sure that it follows the proper syntax of whatever language it is written in, as well as looking for any obvious security flaws.
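
As a small, concrete example of automated static analysis (my own sketch, not from the podcast), Python’s built-in ast module can inspect source code without ever executing it, for instance to flag functions that are missing docstrings:

import ast

source = '''
def add(a, b):
    return a + b
'''

# Parse the source text into a syntax tree -- the code is never executed.
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        print(f"Function '{node.name}' has no docstring (line {node.lineno})")

Linters such as flake8 or pylint perform this same kind of source-only inspection at a much larger scale.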

DYNAMIC TESTING:

This involves executing software or code and understanding how it processes information and behaves. Dynamic testing is all about running code and tracking how it behaves and responds to certain situations, specifically whether it performs as efficiently as it should while using the resources it should. If static testing asks whether we are building the thing the right way, dynamic testing asks whether we are building the right thing. Unit testing is a basic place to start with dynamic testing, as it exercises individual units of code in isolation from one another, using unit tests to make sure that each piece of the code does its one job properly. Integration testing is one step more complex than unit testing, involving multiple units and the ways in which they interact with each other. System testing refers to testing the entire system and how it functions as a single machine, assessing whether it meets all established criteria. Security testing, performance testing, or any other form of testing that requires running the software is considered dynamic testing as well.
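
To show the difference between the first two dynamic levels, here is a tiny sketch in Python (the functions are hypothetical, invented purely for illustration):

# Hypothetical code under test
def clean_word(word):
    """Lowercase a word and strip surrounding whitespace."""
    return word.strip().lower()

def count_words(text):
    """Count cleaned, non-empty words in a sentence."""
    return len([w for w in text.split() if clean_word(w)])

# Unit test: one function, exercised in isolation
def test_clean_word_unit():
    assert clean_word("  Hello ") == "hello"

# Integration test: the two functions working together
def test_count_words_integration():
    assert count_words("  Hello   world  ") == 2

Both checks only have meaning once the code actually runs, which is what makes them dynamic rather than static.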

From the blog CS Blogs with Aidan by anoone234 and used with permission of the author. All other rights reserved by the author.

Test Driven Development

After learning about test-driven development in class a couple of classes ago, I found myself confused as to why this process would prove successful in software development, so I did some research and found a video on YouTube from a channel called Fireship titled “Test-Driven Development // Fun TDD Introduction with JavaScript.” While, of course, JavaScript is not currently one of the programming languages I primarily work with, the video proved informative and educational nonetheless. The video begins by emphasizing the phrase “Red, Green, Refactor” and describes test-driven development as a technique where a programmer describes the behavior of the code before proceeding to implement it. “Red, Green, Refactor” describes the process of, first, writing a failing test, then writing some code to get the test to pass, then going back and optimizing the code in a manner that suits the project requirements.
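
To tie the three colors to actual code, here is a tiny sketch in Python rather than JavaScript (the function and test are my own, not taken from the video):

# Red: describe the behavior first -- this test fails until fizzbuzz exists.
def test_fizzbuzz_returns_fizz_for_multiples_of_three():
    assert fizzbuzz(9) == "Fizz"

# Green, then Refactor: the first version that passed could simply have
# returned "Fizz"; once more tests exist (for 5, 15, and plain numbers),
# the implementation is reworked while every test stays green.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)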

Although I have no doubt that the majority of the confusion around this topic comes from inexperience and being in the process of learning, I certainly had questions and confusion around how and why it makes sense to put forth an implementation process for code before even writing a single line. Conceptually it makes sense to lay out the general steps to a project and attempt to anticipate some of the issues that may come up beforehand, but something about testing code that doesn’t exist is a confusing concept for me.

The video talked about different types of testing at the different levels starting with unit testing and working into more complex testing methods. Each testing method is built to cater to a different type of program that is being tested.

Generally, after watching this, it sparked curiosity about what software development would look like without testing, and how the continual development of the testing process has allowed programmers not only to improve the way they work but also to create more robust and successful outcomes in their code.

I think it’s interesting looking back at my first three or so years of computer science classes, learning to code in multiple different languages with different use cases, styles, intents, and outcomes. Not one class, across the two schools I attended and the three years I was in school, worked with testing before the higher-level senior-year classes. This made me wonder why something as important and integral to software development as testing code was left out of the learning process until so late.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Week 12- Test Driven Development

During week 12, we learned about test-driven development. Test-driven development is a method in which you outline what functions need to be tested, write the tests, and then write the code. It is an iterative process that works through one test at a time, building on what is already written.

In class, we practiced the method by building a Mars Rover. We read what the rover should do and made a list of what we needed to test and build. We started with the easiest thing on the list and wrote a test for it. Then, we wrote the minimum amount of code to make the test run. We tweaked the test block if it needed it, and then ran the single test. Next, we repeated the process for the next easiest thing on the list, and we will keep repeating it until all of the tests are written.
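
As a rough illustration of what that first, easiest test might look like in Python (the actual class code may have differed; the Rover class here is hypothetical):

# Red: the simplest item on the list -- a new rover knows where it starts.
def test_new_rover_reports_starting_position():
    rover = Rover(x=0, y=0, direction="N")
    assert rover.position() == (0, 0, "N")

# Green: the minimum code needed to make that single test pass;
# moving and turning would come in later iterations of the cycle.
class Rover:
    def __init__(self, x, y, direction):
        self.x = x
        self.y = y
        self.direction = direction

    def position(self):
        return (self.x, self.y, self.direction)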

Test driven development focuses on just the tests before the actual code is written. It was a little confusing at first to think about how to write a test before the code, but once it clicked, it was easy. 

I found a website on test driven development and it described the process in a slightly different way. BrowserStack explains test driven development as a “red-green-refactor” cycle. The red phase is when just the test is written and it doesn’t pass because there’s no code. The green phase is when the bare minimum code is written for the test to run and it passes. The refactor phase is tweaking the tests and code so everything runs and passes. The cycle is run through every time a new test is written and repeats until all of the tests are written and pass. 

Test-driven development is a very useful method for debugging. Since the tests are run with every new addition, bugs are found as they appear and do not slip through the cracks as easily. The method is also very efficient and allows the programmer to easily add new functions later in development. The iterative process is easy for the programmer to maintain and provides a much larger testing scope than other methods.

I liked working with test driven development. I think the process is very organized and straightforward. I would definitely use this in the future for projects that require thorough tests and with functions that build on each other. 

Source referenced: https://www.browserstack.com/guide/what-is-test-driven-development

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Write Code that’s Readable. Please.

Let’s talk about something that affects programmers at all levels, on all codebases and projects: readability. Specifically, the balance between code that is fast and resource efficient and code that is easily understood and maintained.

First, readability. It’s a given that any code produced and delivered in a professional capacity is almost certainly not being written by a single developer. Therefore, good developers write code from which others can understand and pick up the intended behavior. Commenting, appropriate use of whitespace and indentation, and effective developer documentation can make code easy to work with for anyone.

Here’s an example of two Python code snippets that both return the summation of a given array, but are structured quite differently:

Code 1:

# Returns summation of given array of numbers

def sum_numbers_readable(nums):
   total = 0    # Initialize starting total
   for num in nums:   # Sum all numbers in array nums
      total += num
   return total   # Return sum

print(sum_numbers_readable([1, 2, 3, 4]))

Code 2:

def sum_numbers_fast(nums):
   return sum(nums)
print(sum_numbers_fast([1, 2, 3, 4]))

See the difference? Both pieces of code do the same thing and produce the same output. Code 1, however, contains comments describing the intended behavior of the function and the expected parameter type, as well as what each line of the function does. Blank lines and proper use of whitespace also divide the code into relevant sections. Code 2, on the other hand, has no comments, poor use of whitespace (or as poor as Python will allow), and relies on Python’s built-in sum() function. Here the intended behavior is still pretty clear, but that will not always be the case.

Take a more extreme example, just for the fun of it: go write or refactor some piece of Java code to be all on the same line. It’ll compile and run fine. But then go ask someone else to take the time to review your code, and see what their reaction is.

On the other hand, we have code efficiency and speed, generally measured in runtime or operations per second. Comparing our two code snippets using a tool like perfpy.com, we find that the second, less readable snippet executes faster, with a higher operations-per-second figure (931 nanoseconds vs. 981 nanoseconds and 1.07 op/sec vs. 1.02 op/sec; not a huge difference at all, but this scales with program complexity).
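
If you would rather reproduce this kind of comparison locally than on a website, Python’s built-in timeit module gives a rough measurement (the exact numbers will of course differ from the ones above):

import timeit

setup = """
def sum_numbers_readable(nums):
    total = 0
    for num in nums:
        total += num
    return total

def sum_numbers_fast(nums):
    return sum(nums)

data = [1, 2, 3, 4]
"""

# Time each version over one million calls; smaller totals mean faster code.
readable = timeit.timeit("sum_numbers_readable(data)", setup=setup, number=1_000_000)
fast = timeit.timeit("sum_numbers_fast(data)", setup=setup, number=1_000_000)
print(f"readable loop: {readable:.3f}s   built-in sum: {fast:.3f}s")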

This gives us some perspective on the balance between performance and readability/maintainability. It also helps to keep in mind that performance and readability are both relative.

Looking back at our two Python snippets, most developers with experience would opt for the second design style. It’s faster, but also easily readable provided that you understand how Python methods work. However, people just learning programming would probably opt for the more cumbersome but readable first design. With regards to performance, the difference in runtime could be a nonissue or it could be a catastrophic slowdown. It depends entirely on the scope of the product and the needs of the development team.

As a final rule of thumb, the best developers balance readability and efficiency as best they can, while above all else remaining consistent in their style. When looking at possible code optimizations, consider how the balance between readability and performance could shift. The phrase “good enough” tends to have a negative connotation, but if your code is readable enough that other team members can work on it, and fast enough that it satisfies the product requirements, “good enough” is perfect.

References:
Lazarev-Zubov, Nikita. “Code Readability vs. Performance: Shipbook Blog.” Shipbook Blog RSS, 16 June 2022, blog.shipbook.io/code-readability-vs-performance.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Understanding Smoke Testing in Software Development

In software development, a build has to be stable before further, more comprehensive testing can take place if a project is to succeed. One of the ways of guaranteeing this is smoke testing, otherwise known as Build Verification Testing or Build Acceptance Testing. Smoke testing is an early checkpoint that verifies the major features of the software are functioning as desired before other, more comprehensive testing is done.

What is Smoke Testing?
Smoke testing is a form of software testing that involves executing a quick and superficial test of the most crucial features of an application to determine whether the build is stable enough for further testing. It is a minimum set of tests created to verify if the core features of the application are functioning. Smoke tests are generally executed once a new build is promoted to a quality assurance environment, and they act as an early warning system of whether the application is ready for further testing or requires correction immediately.
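
As a rough illustration (the checks and names below are hypothetical, not taken from any source cited here), a smoke suite is often just a handful of fast Python tests covering the critical paths of a freshly deployed build:

# Stand-ins for real checks against a build in the QA environment.
def app_is_running():
    """In a real suite this might ping the service's health endpoint."""
    return True

def user_can_log_in(username, password):
    """In a real suite this would exercise the actual login flow."""
    return bool(username) and bool(password)

# The smoke tests themselves: quick, shallow, and only about the essentials.
def test_smoke_application_starts():
    assert app_is_running()

def test_smoke_login_works_with_valid_credentials():
    assert user_can_log_in("qa_user", "correct-password")

If any of these fail, the build goes back to the developers before deeper functional, security, or performance testing even begins.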

Important Features of Smoke Testing

- Level of Testing: Smoke tests focus on the most important and basic features of the software, without exploring every functionality.
- Automation: Automated smoke testing is a common routine, especially under time constraints, providing quick, repeatable tests.
- Frequency: Smoke testing is normally run after every build or significant code change to allow early identification of major issues.
- Time Management: The testing itself is quick, so it saves valuable time by catching critical issues early.
- Environment: Smoke testing is typically performed in an environment that mimics production so that test results are as realistic as possible.

Goal of Smoke Testing

The primary objectives of smoke testing are:

- Resource Optimization: Don’t waste resources and time on deeper testing if core functionalities are broken.
- Early Detection of Issues: Identify significant issues early so that they can be fixed more quickly.
- Refined Decision-Making: Provide a clear basis for deciding whether the build is ready to move on to thorough, detailed testing.
- Continuous Integration: Make every new build meet basic quality standards before it is added to the master codebase.
- Pragmatic Communication: Give rapid feedback to development teams, allowing them to communicate clearly about build stability.

Types of Smoke Testing
There are several types of smoke testing, depending on the methodology chosen and the setting where it is put into practice:
- Manual Testing: Test cases are written and executed manually by testers for each build.
- Automated Testing: Automation tools run the tests on their own; this works best for projects with tight deadlines.
- Hybrid Testing: Combines automated and manual tests to capitalize on the strengths of each approach.
- Daily Smoke Testing: Conducted on a daily basis, especially in projects with frequent builds and continuous integration.
- Acceptance Smoke Testing: Specifically focused on verifying whether the build meets the key acceptance criteria defined by stakeholders.
- UI Smoke Testing: Tests only the user-interface features of an application to verify that basic interactions work.

Applying Smoke Testing at Various Levels
Smoke testing can be applied at various levels of software testing:
- Acceptance Testing Level: Ensures that the build meets minimum acceptance criteria established by the stakeholders or client.
- System Testing Level: Ensures that the system as a whole behaves as expected when all modules work together.
- Integration Testing Level: Ensures that modules that have been integrated work and communicate as expected when combined.

Advantages of Smoke Testing
Smoke testing offers several advantages, including:
- Quick Execution: It is easy and quick to run, and hence ideal for frequent builds.
- Early Detection: It helps catch defects at the initial stage, preventing money from being wasted on faulty builds.
- Improved Software Quality: By detecting issues at the initial stage, smoke testing leads to better software quality.
- Minimized Risk of Failure: Detecting core faults in earlier phases reduces the risk of failure in subsequent testing phases.
- Time and Effort Savings: Time and effort are conserved because futile testing of unstable builds is avoided.

Disadvantages of Smoke Testing
Although smoke testing is useful in many respects, it has some disadvantages too:
- Limited Coverage: It checks only the most critical functions and does not cover other potential issues.
- Manual Testing Drawbacks: Performed manually, it can be time-consuming, especially for larger projects.
- Inadequate for Negative Tests: Smoke testing typically does not involve negative testing or invalid-input scenarios.
- Minimal Test Cases: Since it only checks basic functionality, it may fail to identify all possible issues.

Conclusion
In conclusion, smoke testing is an important practice at the early stages of software development. It decides whether a build is stable enough to go for further testing, saving time and resources. By identifying major issues early in the development stage, it facilitates an efficient and productive software testing process. However, it should be remembered that smoke testing is not exhaustive and has to be supported by other forms of testing in order to ensure complete quality assurance. 

Personal Reflection

Looking at the concept of smoke testing, I can see the importance of catching issues early in the software development process.
It’s easy to get swept up in the excitement of rolling out new features and fully testing them, but if the foundation is unstable, all the subsequent tests and optimizations can be pointless. Smoke testing, in this sense, serves as a safety net, getting the critical functions running before delving further into more rigorous tests. I think the idea of early defect detection resonates with my own working style.

Since I like to fix small issues as they arise rather than letting them escalate into big problems, I appreciate that smoke testing lets development teams solve “show-stoppers” early on, preventing wasted time, effort, and resources later. Though it does not pick up everything, its simplicity and speed can save developers from time wasted testing a defective product, leading to a smoother and more efficient workflow. The process seems essential for maintaining a rock-solid, healthy product, especially when new builds are rolled out frequently.
The benefits of early problem detection not only make the software better, but also stimulate a positive feedback loop of constant improvement within the development team.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

A lot of things worked pretty well this sprint compared to the last. Communication, effort, work, progress, collaboration, planning, and more were all present and at a much higher level than in the last sprint. We still each had our own section that we mainly worked in, but there was plenty of collaboration when tackling a difficult issue, researching an issue, or just divvying up work if someone had free time. I think that our Sprint 2 goal was very concrete, and the issues we made at the start of the sprint and throughout it led to smooth sailing to the finish line.

I wouldn’t say there was anything that didn’t work well, but rather things that could be improved. Communication could always be better; even though it was a night-and-day difference from Sprint 1, there were some points at which it would have been nice to hear from certain team members about what was going on or why they weren’t present. This may be more of a personal feeling, but there is one member who handles almost all of our GitLab organization and usage and takes notes at our meetings. I’m not sure if it’s for their sake or the team’s, but I feel they could ask the rest of the team to help with some of this. They are the most adept at using GitLab and, I assume, want to stay on top of things and as organized as possible, but I’m certain other team members could help out, even if only a little.

To improve as a team, it would be great to communicate when a team member can’t make a meeting and to make good progress and decisions even without certain team members present. I think we divvy up our issues well, but other work that isn’t exactly an issue could also be divided among members, especially if it’s rather menial.

I think I could have gathered a better idea of our group’s work and flow and of what the other team members were working on. For the Sprint 2 Review, I prepared to share what I had worked on, but I was wholly unprepared to share what other members had worked on and to present our work as a whole.

The apprenticeship pattern I felt was most relevant to my experiences was “Confront Your Ignorance” from Chapter 2. This pattern states that “There are tools and techniques that you need to master, but you do not know how to begin… and there is an expectation that you already have this knowledge.” The solution to this pattern states to “Pick one skill, tool, or technique and actively fill the gaps in your knowledge about it.” Although this pattern doesn’t exactly describe my experience during the sprint, it felt the closest out of all the other patterns. Rather than tools and techniques, I would say that I need to better understand our group’s work and plans as well as individual members’ work. To handle this, I could go through our Gitlab issues seeing what has been accomplished and who is assigned to what, I could explore our repositories to get a better understanding of the individual pieces of the project, and I could simply ask my team members about what they have worked on, what they plan to work on, and their idea of our group’s work. Taking notes would’ve been a great option as well due to how much information might be shared during our meetings.

From the blog CS@Worcester – Kyler's Blog by kylerlai and used with permission of the author. All other rights reserved by the author.