Category Archives: CS@Worcester

Decoding Software Technical Reviews: A Practical Guide for Developers

Hello again and welcome back to this week’s update. Today, I want to discuss something very critical in software development that many of us (me included) find quite intimidating – the Software Technical Review. For any developer, whether a professional coder or just a beginner, getting familiar with this process will completely change the game. So, grab your coffee or another favorite brew, and let’s break STRs down.

Firstly, what is it?

To describe it in simple terms, a Software Technical Review is a sort of code brainstorming session. It brings together a team of developers – and sometimes other stakeholders – to look through the project’s code and design, as well as any other technical parts. It is a kind of peer review of your code, where team members check each other’s homework to make sure everything is on course before proceeding further.

Secondly, why is it important?

Well, STRs help identify bugs immediately, ensure the software complies with quality requirements, and help everyone involved stay up-to-date with the project’s aims. All in all, it’s like double-checking an essay before submitting it: it would be quite irritating to receive a failing grade on a paper because of small mistakes that could have been rectified. In software, even small details can have severe consequences.

What actually happens in the review?

The team analyzes the code to detect blunders, potential security risks, and issues with overall quality.
They examine the design documents to check whether the planned architecture makes sense and is achievable.
The team also looks at how development is progressing and whether it will satisfy the requirements of the intended users.
Members then share their ideas about what to upgrade or alter.
Who is involved (and it’s not just developers)?

Finally, here is the STR’s cast:

Project managers, who ensure the project is on the right path to its goals and timeline.
Quality assurance engineers, who are responsible for quality and for making sure no rough edges or missing functionality slip through.
Other developers, who may catch an error the original coder missed during the review.

Some review suggestions?

Know what you have done well enough to explain it without stammering, keep an open mind about revisions, remember that the review is for everyone’s benefit, and try to avoid subjective opinions – stick to points that make your code better, not comments that make the coder feel worse.

As they say, two minds are better than one, so take advantage of the other minds worrying about the same project as you in order to better yourself and your programming skills.

Till next time,

Ano out.

References

https://www.geeksforgeeks.org/software-technical-reviews-in-software-testing

https://www.softwaretestinggenius.com/understanding-software-technical-reviews-strs/

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Exploring Continuous Integration: A Pillar of Modern Software Quality Assurance (Week-14)

In the rapidly evolving world of software development, Continuous Integration (CI) has emerged as a key practice that significantly enhances the quality and efficiency of products. This week, we explore the critical role of CI in modern software development, highlighting its benefits, best practices, and essential tools.

What is Continuous Integration?

Continuous Integration is a development strategy where developers frequently merge code changes into a central repository, followed by automatic builds and tests. The main aim is to provide quick feedback so that if a problem arises, it can be addressed at the earliest opportunity.

Benefits of Continuous Integration

1. Early Bug Detection: Frequent integration tests help identify defects early, reducing the costs and efforts needed for later fixes.

2. Reduced Integration Issues: Regular merging prevents the integration hell typically associated with the ‘big bang’ approach at the end of projects.

3. Enhanced Quality Assurance: Continuous testing ensures that quality is assessed and maintained throughout the development process.

4. Faster Releases: CI allows more frequent releases, making it easier to respond to market conditions and customer needs promptly.

Implementing Continuous Integration

1. Version Control: A robust system like Git is essential for handling changes and facilitating seamless integrations.

2. Build Automation: Tools such as Jenkins, Travis CI, and CircleCI automate building, testing, and deployment tasks.

3. Quality Metrics: Code coverage and static analysis help maintain high standards of code quality.

4. Automated Testing: A suite of tests, including unit, integration, and functional tests, is crucial for immediate feedback on the system’s health (a small JUnit sketch follows this list).

5. Infrastructure as Code: Tools like Docker and Kubernetes ensure consistent environments from development to production.
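As a concrete illustration of the automated testing step, here is a minimal sketch of the kind of JUnit 5 test a CI server such as Jenkins, Travis CI, or CircleCI would run on every push. The Calculator class and its methods are invented placeholders for whatever code the project actually builds.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// A hypothetical unit under test; in a real project this would be production code.
class Calculator {
    int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }
}

// The CI build runs tests like these automatically; a failing assertion fails the build.
class CalculatorTest {
    @Test
    void dividesEvenly() {
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    void rejectsDivisionByZero() {
        assertThrows(IllegalArgumentException.class, () -> new Calculator().divide(10, 0));
    }
}

Because the suite runs on every integration, a broken change is flagged within minutes rather than at the end of the project.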

Best Practices for Continuous Integration

1. Maintain a Single Source Repository: Centralizing code in one repository ensures consistency and simplicity.

2. Automate Everything: From compiling and testing to packaging, automation speeds up the development process and reduces human error.

3. Ensure Builds Are Self-Testing: Builds should include a comprehensive test suite, and a build should only be considered successful when all of its tests pass.

4. Prioritize Fixing Broken Builds: Addressing failures in the build/test process immediately keeps the system stable and functional.

5. Optimize Build Speed: A quick build process promotes more frequent code integration and feedback.

Tools for Continuous Integration

  • Jenkins: Manages and automates software development processes.
  • Travis CI: A hosted integration service for building and testing projects hosted on GitHub.
  • CircleCI: Integrates with several cloud environments for CI/CD practices.

Conclusion

Continuous Integration is essential, not just as a technical practice but as part of the culture within high-performing teams. It fosters a disciplined development environment conducive to producing high-quality software efficiently and effectively. For those seeking to delve deeper, Continuous Integration: Improving Software Quality and Reducing Risk by Paul M. Duvall et al. is an excellent resource.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Integration Testing

After recently reading up on unit testing and having some exposure to it in class, I figured the next step would be to look at integration testing. With plenty of blogs to choose from, all giving pretty similar information on integration testing, I chose to highlight the blog published by Katalon (https://katalon.com/resources-center/blog/integration-testing). This blog goes deeper into certain aspects of integration testing, such as explaining some differences between low-level and high-level modules. That extra insight helped piece together a lot of the questions I had while reading other blogs on the topic.

Integration testing is a software testing method that involves combining individual units and testing how they interact. The main purpose of integration testing is to ensure that the modules work together as intended and do not create any unforeseen bugs. Each module should be individually unit tested before integration testing.

Integration testing helps identify bugs that may not be seen in unit testing. These bugs can come from a multitude of places: inconsistent logic between different programmers, modified code, data being transferred incorrectly, poor exception handling, and incompatible versions may all lead to bugs that only appear during integration testing. Integration testing helps identify these bugs and, depending on which model is being implemented, can make it easier to find where a bug lives.

There are two main approaches to performing integration testing: the big bang approach and the incremental approach. The big bang approach involves integrating and testing all modules at once. This works well for small systems without much complexity; in a complex system, however, it makes finding the location of a bug harder. When systems become larger and more complex, it may be best to switch to incremental testing, an integration testing method where modules are combined and tested in smaller groups. These groups continue to be combined with other groups until the whole system has been tested. Three common incremental integration testing methods are the bottom-up approach, the top-down approach, and a hybrid approach. Bottom-up starts by integrating the lowest-level modules before adding higher-level modules. Top-down does the opposite, starting with the highest-level modules and adding the lower ones as tests pass. The hybrid, also known as the sandwich approach, is a combination that may alternate between testing the top-level and bottom-level components.
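As a rough sketch of bottom-up incremental integration (the class names here are my own invented example, not from the Katalon blog), the low-level module is unit tested on its own first and then exercised again through the higher-level module that depends on it:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Low-level module: applies a percentage discount to a price.
class DiscountCalculator {
    double apply(double price, double rate) {
        return price - (price * rate);
    }
}

// Higher-level module that depends on the low-level one.
class CheckoutService {
    private final DiscountCalculator calculator = new DiscountCalculator();

    double total(double price, double discountRate) {
        return calculator.apply(price, discountRate);
    }
}

class CheckoutIntegrationTest {
    @Test
    void checkoutAndDiscountCalculatorWorkTogether() {
        // Both modules run together, so data passed incorrectly between them,
        // or inconsistent logic between their authors, would surface here.
        CheckoutService checkout = new CheckoutService();
        assertEquals(90.0, checkout.total(100.0, 0.10), 0.0001);
    }
}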

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

AI In Testing

For the summer, I will be working on a project revolving around machine learning and AI. I have a distinct interest in AI, and I have often wondered how it may affect the jobs of computer scientists. Testing in particular has been moving toward automation, but human intervention is still required. This gave me a lot of interesting thoughts about how testing might be COMPLETELY automated using AI as the intermediary. My most-used blog for testing information, StickyMinds, had an article about this exact thought: Examining the Impact of AI on Software Testing: A Conversation with Jeff Payne and Jason Arbon by Todd Kominiak.

The integration of AI into various industries, including software development and testing, has sparked discussion about its potential impact. This article is an interview about the software testing community and its mix of anticipation and apprehension over how AI will reshape testers’ roles and practices. Jeff Payne, CEO of Coveros, and Jason Arbon, CEO of TestersAI, were interviewed to delve into the implications of AI for software testing. Arbon challenges the notion that AI will make testers obsolete, and this is a sentiment I agree with. He argues that AI will instead increase the importance of testers in ensuring software quality, as AI can often err. Arbon states that the current response from testers toward AI has been somewhat ‘lackluster’, with many primarily experimenting with AI tools rather than fully utilizing their potential. One key point Arbon offers is that testers would need to reevaluate their approach to testing if AI emerged as a driving force in the field. Even as AI technology evolves to automate certain testing tasks, human testers will be required to ensure the reliability and appropriateness of software solutions. This is something I think will be essential for all professions that go the route of automation through AI. Arbon further suggests that the complexity and scale of software testing will increase alongside improvements in AI-driven testing. In the future, he envisions a pivotal role change for testers, wherein they ask essential questions about the structure and content of the code beyond mere functionality. This would also be beneficial, especially for addressing diversity concerns in coding and keeping AI bias in check.

This interview opened my eyes greatly about software testing. Not only will AI change the roles of testers; the article also changed how I think about testers as a whole. Going into it, I believed a tester’s role was to assure the functionality of the code, but being able to address ethical concerns throughout the code is another role entirely. I am now less apprehensive about the role of AI in testing, and in software development for that matter, and I am more optimistic about seeing how my job will evolve as time goes on.

Source: https://www.stickyminds.com/interview/examining-impact-ai-software-testing-conversation-jeff-payne-and-jason-arbon

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Technical Review

When working on a big project, in a big team, with a lot of other people working together, things can become confusing. Sometimes code may not work, or you will not entirely understand what is happening. The best thing to do is to step back and conduct a technical review: review the code, the goals, and any other areas that may need improvement or assistance. Take the time to straighten things out so you and the team can get back to work efficiently and effectively. But how do you go about a technical review?

In this blog post, Tony Karrer explains what a technical review is, how to tell when you need one, some strategies for running one, and some areas to review. He describes a technical review as “a deep-dive assessment of your software, infrastructure, team and processes,” adding that “it provides findings and recommendations intended to foster a mutual understanding between business and software leaders, shedding light on the current state of your technology and your team.” Signs that you need a technical review include slow or late delivery, random or persistent bugs, and sleepless nights from strategic worries. A technical review shouldn’t only be a response to malfunction, though; it can be prompted by scaling into new markets, keeping up with competition, outgrowing your stack, changes to the tech team, or simply wanting to stay on top of things.

Karrer describes four strategies, each different from the others and ranging from general to specific and in-depth: straightforward analysis, pragmatic assessment, expert recommendations, and finding sessions. After determining which strategy you are comfortable with, you can go ahead and start reviewing. He provides examples of where you may want to look, including background information to get a general idea of the project, the architecture, targeted code, or process and team. If an area needs work or is struggling, that is one hundred percent a spot you want to review. While you are doing that, create a summary, list your findings, and include recommendations or solutions if you have any. Finally, bring them to the review meeting, where you and your team will go over the project together, sort out the issues, and find possible solutions.

Doing this in class was actually fairly helpful. I feel like if it had been my own code, I would have gotten more benefit out of it, but I understand the premise: it’s good to have multiple people look at the code, come together, and see what kinds of issues were found.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Testable code

In my last post, I highlighted a blog that gives an overview of unit testing and how we can use it to increase the quality and efficiency of our testing or debugging processes. The point of unit testing is to test “units” or isolated methods, classes, or modules of our code to determine if there are any issues. Writing code with unit testing in mind makes it simpler for developers to debug their code. I felt this next blog, “TDD: Writing Testable Code,” by Eric Elliott, would help further readers’ understanding of writing this kind of code and the benefits that come from it.

Elliott’s blog discusses many aspects of writing testable code, including tight coupling, test-driven development, separation of concerns, and an overview of different data management strategies. He describes how tight coupling limits testability and provides strategies to reduce it, including TDD. He also discusses the benefits of testing first vs. testing after, with test-first being the main focus of test-driven development. He goes on to describe data management strategies.

Reading this post brought my attention to an important aspect of software development: fostering a developer culture that improves the quality of our software as a whole. Writing with testability in mind benefits users, increases the adaptability of our code, and allows us to fix issues without revamping the whole system.

The breakdown of tight coupling and the different forms it can take was comprehensive, giving me a straightforward explanation of what I should consider when writing code. Elliott gives 11 forms and causes of tight coupling, including parent class dependencies, temporal coupling, and event chains. TDD was something I was already aware of, but the benefits of using it (and the costs of not using it) were still insightful: development should proceed step by step, failing test by failing test, until we have testable code. The separation of concerns section was also interesting. He says we should isolate our code by concern, including business and state logic, user interface, and I/O and effects; separating these into their own modules allows us to test and understand each independently.
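As a small sketch of that idea (the interface and classes below are my own illustration, not Elliott’s code), injecting a dependency behind an interface separates the I/O concern from the business logic and breaks the tight coupling that would otherwise make the unit hard to test in isolation:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The I/O concern hidden behind an interface; the real implementation
// might read from a file, a database, or a network service.
interface DataFeed {
    int latestValue();
}

class ReportService {
    private final DataFeed feed;

    // The dependency is injected rather than constructed inside the class,
    // so tests can substitute a simple fake.
    ReportService(DataFeed feed) {
        this.feed = feed;
    }

    String summary() {
        return "Latest value: " + feed.latestValue();
    }
}

class ReportServiceTest {
    @Test
    void summaryUsesInjectedFeed() {
        ReportService service = new ReportService(() -> 42); // fake feed, no real I/O
        assertEquals("Latest value: 42", service.summary());
    }
}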

I plan to consider all of these strategies when developing projects in the future. I will use test-driven development to limit tight coupling between my modules, classes, and methods, to ensure that my code is readable and testable and that each of its concerns is independent of the others.

From the blog CS@Worcester – KindlCoding by jkindl and used with permission of the author. All other rights reserved by the author.

Blog #4: First Exposure to Testing

Before being exposed to JUnit, my only experience with automated testing was through CxxTest while I was learning C++. Once I started to learn JUnit, both the syntax and general format seemed to ring a bell. This led me to look back at my previous C++ programs, where I found that the assertion-based testing was essentially identical to JUnit’s. After seeing the two side by side, I was curious how these frameworks compare and whether CxxTest has any advantages over JUnit.

While looking for an article discussing the full capabilities of CxxTest, I stumbled upon a blog post, Exploring the C++ Unit Testing Framework Jungle by @noel_llopis, which provides extensive explanations of each popular C++ testing framework at the time. Do note that this post was written in 2010, so popular testing frameworks from then may have faded into obscurity and new frameworks may be used in their place. My main draw to this article was Llopis’s section describing his experience with CxxTest and how testing frameworks required a little more work from the user back then. Llopis praised CxxTest for its relative simplicity: it is easy to import into a program and requires far fewer dependencies. From his explanation, I learned that testing frameworks used to require particular formatting within the file, and potentially other libraries, for the tests to function. CxxTest, similar to JUnit, can operate by itself with far fewer dependencies than its competitors (at the time). One feature CxxTest includes natively is object mocking; JUnit can mock objects too, but only after adding another resource, whereas CxxTest has this functionality out of the box. One downside the author does mention is that CxxTest required “use of a scripting language as part of the build process” (Llopis), which may create a barrier to entry for less experienced developers.
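For reference, the assertion style I recognized looks roughly like this in JUnit 5 (a made-up example; the CxxTest version would express the same checks with macros such as TS_ASSERT_EQUALS):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class GreetingTest {
    @Test
    void greetingIsBuiltCorrectly() {
        // The test passes only if every expected value matches the actual value.
        String greeting = "Hello, " + "world";
        assertEquals("Hello, world", greeting);
        assertTrue(greeting.startsWith("Hello"));
    }
}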

While comparing these two testing frameworks, I found myself asking a new question: how accessible or inaccessible were the testing frameworks of the past? Llopis seemed enthusiastic about features I had assumed were common to all frameworks. Writing about this also made me wish I had spent more time programming in C++ outside of classes. Reading this helped expand my knowledge of how CxxTest operates, so when I inevitably go back to refine my C++ skills I’ll be ready to pick up this framework once more. Between JUnit and CxxTest there are many surface-level similarities, as both are unit testing frameworks. The differences lie in smaller features that some developers may depend on, such as mocking. Having experience with both, I find it hard to choose one over the other, as they generally function the same and have similar levels of accessibility.

-AG

Source: https://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #3: Testing Beyond JUnit

Throughout my experience with software quality assurance, I’ve used two different testing frameworks. The most recent is JUnit, which I learned earlier this year; the other is CxxTest, which I had mostly forgotten (including its name) until writing this. Each targets a different language: JUnit for Java and CxxTest for C++. The differences between the two made me curious whether a single framework could work for more than one language at once. (I’m not counting frameworks such as CTest that cover both C and C++, since both exist within the C family.) As I searched for a framework that answered my question, I stumbled upon Selenium and subsequent articles comparing it to JUnit. These comparisons drew my attention and sent me down a path to understand what Selenium is.

Before getting into specifics, I must introduce a few definitions that help differentiate these two frameworks. Unit testing is the method of testing small increments of source code to ensure that each ‘unit’ works as intended and meets the developers’ specifications; this lays the foundation for later development. End-to-end testing focuses on the components the user interacts with directly, such as the UI and web applications. JUnit focuses on unit testing of code written in Java, while Selenium focuses on end-to-end testing in multiple languages, including Java. Fundamentally, these frameworks test different aspects of software development, so any comparison between the two must be taken with some nuance.

With definitions aside, we can now talk about what makes these frameworks unique. A community post on StackShare, JUnit vs Selenium, gives a concise view of what makes them differ. My experience with testing frameworks has only been inside IDEs; Selenium, however, runs against browsers through WebDriver tools. Additionally, Selenium tests run within a browser, whereas JUnit tests require a Java Virtual Machine. One downside of Selenium is its dependency requirements, as opposed to JUnit, which can simply be imported into your program. I’m more partial to JUnit, as I only have experience with back-end development, so Selenium isn’t directed toward a developer like me. That said, those more experienced in front-end development may find the dependencies and browser configuration a small cost for its flexibility in testing.
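To make the contrast concrete, here is a minimal sketch of a Selenium end-to-end test written with its Java bindings (the URL and expected title are placeholders, and running it assumes a local Chrome browser with a matching driver); note that it still uses JUnit to structure and assert the test:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class HomePageEndToEndTest {
    @Test
    void homePageTitleMentionsSiteName() {
        // Launches a real browser, loads the page, and checks what a user would see.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");
            assertTrue(driver.getTitle().contains("Example"));
        } finally {
            driver.quit(); // always close the browser, even if the assertion fails
        }
    }
}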

I have a little experience with front-end development, so I can understand how the tools provided by Selenium could be invaluable. A lesson I’ve learned from this dive into Selenium is that all aspects of development (e.g., front end vs. back end) require some form of automated testing. Additionally, testing that is easy for one department may be more complex for another. With these different areas of testing also come different methods: end-to-end testing is noticeably different from unit testing, as each method focuses on a specific function of the software.

-AG

Source: https://stackshare.io/stackups/junit-vs-selenium

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

CS-448: Week 11

Confront Your Ignorance

This pattern is about confronting your ignorance. After going through the “Expose Your Ignorance” pattern, where you identify gaps in your skill set, the next step is to confront those gaps. The main problem this pattern addresses is that there are techniques and tools that need to be learned, but the process of actually learning them is unclear. Some of them may be techniques everyone else already knows, so there is an expectation that everyone be competent in that area.

The solution to this pattern, using the list generated from “Expose Your Ignorance,” is to pick an item from that list and actively fill the gaps for that item. How this is done is a matter of preference. Some prefer to research everything they can about a topic, such as introductory articles and forums. Others prefer a more hands-on approach, going straight into the topic by building a simple project. Although confronting your ignorance is mostly an individual task, asking mentors and experienced people what they can share with you is important. Mentors may also have picked up tricks of their own that are not discoverable through documentation.

As mentioned before, this pattern is to be applied after exposing your ignorance. Exposing your ignorance is a critical step because it makes the gaps in your knowledge public, which makes your team aware of those gaps and of your ability to learn. Having your gaps on display for your team encourages an environment where failure and learning are acceptable, as both play a large role in an apprentice’s transition to journeyman and beyond.

Conclusion

This pattern is helpful to me because confronting my ignorance is something I struggle with at times. The pattern states that exposing your ignorance without confronting it can lead to excessive humility and helplessness, which are feelings I have had in the past, so this pattern gives me some encouragement to start confronting the gaps in my knowledge. After reading the patterns “Expose Your Ignorance” and “Confront Your Ignorance,” I feel much more confident about how I can improve my skills in a methodical manner going forward.

From the blog CS@Worcester – Zack's CS Blog by ztram1 and used with permission of the author. All other rights reserved by the author.

Security Testing

Security testing is an important part of software testing: it finds weaknesses in the software and checks whether our data is protected from intruders. It matters because it helps ensure that our software has no weak spots and that our data and information are safe. There are many kinds of security testing, such as password cracking, penetration testing, and vulnerability testing. These three are some of the main ideas of security testing and of understanding the importance of keeping personal information safe. Password cracking testing identifies weak passwords and helps make sure that people use stronger ones. Penetration testing assesses the system using different techniques; its purpose is to protect important data from users who should not have access to the system. Vulnerability testing identifies the weakest attributes of the system that could be exploited by malicious software. Many things can make software vulnerable, such as a bug in the software, incorrect software testing, or the presence of bad code.
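As one tiny, simplified illustration (my own sketch, not from the blog): the kind of weak-password rule that password cracking testing checks for could be enforced by something like the following, where the length threshold and the sample list of common passwords are arbitrary choices for the example.

import java.util.Set;

// A simplified password-strength rule of the sort a password cracking test
// would expect the system to enforce before accepting a new password.
public class PasswordPolicy {

    // A tiny sample of frequently cracked passwords; real deny lists are far larger.
    private static final Set<String> COMMON = Set.of("password", "123456", "qwerty");

    public static boolean isStrong(String password) {
        if (password == null || password.length() < 12) {
            return false; // too short to resist brute-force guessing
        }
        if (COMMON.contains(password.toLowerCase())) {
            return false; // appears in common password lists
        }
        boolean hasLetter = password.chars().anyMatch(Character::isLetter);
        boolean hasDigit = password.chars().anyMatch(Character::isDigit);
        return hasLetter && hasDigit;
    }

    public static void main(String[] args) {
        System.out.println(isStrong("password"));          // false: short and common
        System.out.println(isStrong("correct-horse-42x")); // true: long and mixed
    }
}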

I chose this topic because security testing is important in our software testing class, and this blog has a lot of information on how it works. It’s important to know how our private data is kept safe from hackers and how secure our software is. I believe this helped me learn more about security testing, like how penetration testing uses techniques such as black-box and white-box testing to detect vulnerabilities in our software. Security testing makes sure that applications cannot leak private information and can handle threats like hackers. I already knew before reading how important it is to protect our information and to make sure our passwords are not easy for hackers to figure out, but I thought penetration testing was interesting to learn about because I didn’t know it used so many different techniques to find vulnerabilities. Security testing is about keeping our information and our software safe from intruders, and I believe this blog gave the best explanation of it.

https://www.idexcel.com/blog/tag/security-testing/

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.