Category Archives: CS-443

Code Review Essentials: A Critical Tool for Development

Blog Entry:

As a student deeply involved in computer science, I find it pivotal to understand the significance and methodologies of code review. This week, I chose to delve into an article from freeCodeCamp, titled “Code Review: The Ultimate Guide,” which explores the intricacies of code reviews in software development. This resource is particularly relevant to our ongoing discussions in class about software quality and maintenance.

Summary of the Article:

The article comprehensively outlines what code review entails and why it is a critical practice in software development. It discusses the benefits, such as catching bugs early, improving code quality, and fostering team knowledge sharing. Moreover, it provides practical tips on how to conduct effective code reviews, emphasizing the importance of a constructive attitude and specific, actionable feedback.

Reason for Selection:

I selected this resource because, as we learn to code and develop software, understanding the peer review process is essential for professional growth and skill enhancement. This article not only complements our coursework but also offers practical advice that can be immediately applied in any coding environment.

Personal Reflection:

Reading about the detailed processes and benefits of code reviews has been enlightening. I learned that effective code reviewing goes beyond merely finding errors; it is about collaboration, learning, and improving as a team. This has changed my perspective on coding assignments and projects. Instead of viewing them as solitary tasks, I now see them as part of a broader, collaborative process.

I am particularly struck by the emphasis on the mindset and communication skills needed during code reviews. The idea that feedback should be constructive and focused on the code rather than the coder is something I plan to carry forward into my professional life. This approach not only minimizes potential conflicts but also enhances the learning environment, making it more open and conducive to improvement.

Application in Future Practice:

Going forward, I expect to apply the principles from this article in my class projects and eventually in my professional work. Understanding the dynamics of effective communication and feedback within code reviews will be crucial as I work with others on software development projects. This will help in creating more robust, error-free software and in building a supportive team environment.

It is an important skill for everyone regardless of their place in the chain of development. The principles also apply outside of the field of computer science as a successful review is a part of any team process.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Levels of Testing

What is Software Testing?

Software testing is the process of evaluating whether a program or application functions as intended. It serves as a form of quality control, aimed at identifying and rectifying any discrepancies or errors before production use. For instance, in the scenario of a budget-tracking app, testers would verify the accuracy of data input, functionality of buttons, and ease of navigation, among other aspects, to ensure a seamless user experience.

Levels of Testing

Testing levels, also referred to as levels of testing, encompass distinct phases or stages of assessing software throughout its development cycle. These levels target specific aspects of functionality, contributing to enhanced quality assurance and reduced defects. The primary testing levels include:

  1. Unit Testing: This level involves testing individual components, such as methods and functions, in isolation to ensure their correctness and functionality. Automated unit testing is often recommended, allowing for efficient evaluation of code behavior. Unit testing is crucial for identifying bugs at an early stage of development, because each component is tested independently; by writing automated tests that cover various scenarios, developers can ensure that their code performs as expected (a small example appears after this list).
  2. Integration Testing: Integration testing assesses the interaction and integration between various units or modules within the software. It aims to detect any issues arising from coding errors or integration conflicts, facilitating seamless collaboration between components. During integration testing, developers test how different modules work together to ensure that they function properly as a whole, so that any errors or inconsistencies between modules are detected and addressed before moving to the next stage of development.
  3. System Testing: System testing evaluates the integrated system as a whole, verifying its compliance with specified business requirements. Automation tools can be leveraged to streamline system testing processes, ensuring comprehensive assessment of functional and non-functional aspects. It covers functionality, performance, security, and compatibility to ensure that the software meets the desired standards.
  4. Acceptance Testing: Acceptance testing focuses on validating the system’s functionality and performance against user-defined criteria, covering aspects such as usability, security, compatibility, and reliability, through either manual evaluation or automation tools. It involves end users or stakeholders testing the software to ensure that it meets their expectations and requirements, confirming that the software is ready for deployment and use in a real-world environment.
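
To make the unit-testing level a bit more concrete, here is a minimal JUnit 5 sketch built around the budget-tracking example from earlier. The BudgetTracker class and its methods are hypothetical, invented just for this illustration.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, shown inline so the sketch is self-contained.
class BudgetTracker {
    private double balance = 0.0;

    void addIncome(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("Income must be positive");
        }
        balance += amount;
    }

    void addExpense(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("Expense must be positive");
        }
        balance -= amount;
    }

    double getBalance() {
        return balance;
    }
}

class BudgetTrackerTest {

    @Test
    void incomeAndExpensesProduceExpectedBalance() {
        BudgetTracker tracker = new BudgetTracker();
        tracker.addIncome(100.00);
        tracker.addExpense(37.50);
        assertEquals(62.50, tracker.getBalance(), 0.001);
    }

    @Test
    void negativeExpenseIsRejected() {
        BudgetTracker tracker = new BudgetTracker();
        assertThrows(IllegalArgumentException.class, () -> tracker.addExpense(-5.00));
    }
}
```

Run with JUnit 5 on the classpath, both tests exercise a single component in isolation, which is exactly the scope that unit testing targets.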

Software Testing Sequence

A structured testing sequence is essential for thorough evaluation of software functionality. This sequence comprises four main stages:

  1. Unit Testing: Testing individual components to ensure functionality.
  2. Integration Testing: Evaluating interactions between integrated units.
  3. System Testing: Assessing the integrated system as a whole.
  4. Acceptance Testing: Validating the system against user-defined criteria.

Conclusion

Software testing is a critical process in the software development lifecycle. It serves as a safeguard against defects and ensures that software meets user expectations and business requirements. Skipping or neglecting testing can have detrimental consequences, undermining the utility and effectiveness of the software.

From the blog CS@Worcester – CS: Start to Finish by mrjfatal and used with permission of the author. All other rights reserved by the author.

Is Software Testing Gradually Dying?

Hello everyone,

Today I want to discuss a career in software testing. If we want to find a job in software testing in the future, these are the questions we need to consider: Is software testing dying? Will it eventually be replaced by AI? Is a testing team necessary in a company?

Here is a video I want to share with you. The author of the video, as a professional who has been engaged in the software testing industry for more than 10 years, answers the question about the future of this job.

Link: https://www.youtube.com/watch?v=0bSnud-RakA

Is Software Testing Gradually Dying? Future Scope of Software Testing

by Software Testing Mentor

First, he told us the answer is no: software testing will not disappear. He also detailed the reasons behind that answer and the concerns of people who are about to enter the industry. For example: Can testing work be left to the development team? Can software testing be performed by AI instead of humans?

He mentioned that some companies have indeed removed their testing teams and handed the testing work over to their development teams. This may seem feasible, but it raises an important issue: the mindset and logic of testing are different from those of development. A development team's job is to write the program and get it running; switching into a testing mindset is a huge challenge for developers, greatly increases their workload, and rarely produces good results. So it is not practical for the development team to take over the testing work.

Then there is the issue everyone is concerned about: AI. We all know that AI technology is developing rapidly these days. I have always believed that the development of AI is positive and beneficial for us; it can help us complete everyday tasks. Completely replacing humans may be possible someday, but it is not possible now. The author mentioned that today's AI can only analyze a fragment of code; it cannot reason about inspection and trial and error across an entire codebase, and it cannot yet match human logic. Current AI technology cannot replace the work of software testing.

Finally, he also mentioned that although software testing technology keeps innovating, its methods and logic have never changed, and a good testing team remains valuable to the company, the product, and the customers.

From the blog CS@Worcester – Ty-Blog by Tianyuan Wang and used with permission of the author. All other rights reserved by the author.

Week 14

Since we only had one day of class this week, it is a good time to reinforce the ideas we have covered across previous classes. This week I searched for an article that went into depth about software technical reviews. Software technical reviews are very important; understanding the fundamentals is a key skill in the field.

The main function of a software technical review is to examine a document, either in a group or alone, and find errors or defects in the code. It is done to verify various documents, including specifications, system designs, test plans, and test cases. An important thing to consider is that this step gives the client clarity about the project and keeps them informed on how it is going. It is also the point where any remaining changes are finalized to meet the requirements before the product is released to the market. This allows for improved productivity, makes the testing process more cost-effective, results in fewer defects being found outside the team, and reduces the time it takes to create a technically sound document. The three main types of software review are software peer reviews, software management reviews, and software audits.

The process of a software review is simple once you know the steps involved. First is entry evaluation, which is just a standard checklist that establishes the basis for the review; without a checklist, you are grasping at straws trying to find what is wrong with the code or what it is missing. Then comes management preparation, which ensures that your review will have all the required resources, like staff, time, and materials. Next is review planning, where the team creates an objective for the review. You then move on to preparation, where each reviewer is responsible for completing their specific task. Lastly comes examination and exit evaluation, where the group meets to discuss its findings, get everyone on the same page, and verify any discoveries.

Reading this article showed me other steps involved in a software technical review. If our team had set an objective for what to search for in the code last week, the review would have been more goal-oriented instead of a random hunt for faults. It also would have helped to be more organized as a group, so that when we came together we all understood what we were supposed to find. I would like to see what it is like to explain to someone who doesn't code what has been done, and to show them that their money is being well spent. Overall, this is a great way to save time and stay in sync with your team.

https://medium.com/@vyashj09/software-technical-reviews-in-software-testing-what-is-software-technical-review-321462039f4f#:~:text=A%20software%20technical%20review%20is,an%20object%20in%20the%20software.

From the blog cs-wsu – DCO by dcastillo360 and used with permission of the author. All other rights reserved by the author.

Performance Testing

For this week’s blog, I decided to research a bit on performance testing because it is listed as a topic for a class. While doing research, I found a blog called “How To Do Performance Testing: Tips And Best Practices” by Volodymyr Klymenko. In this blog, he discusses the primary purpose of this kind of software testing, types of performance testing, its role in software development, common problems revealed by performance testing, tools, steps, best practices, and common mistakes to avoid. The blog was well organized and clearly simplified each topic.

The purpose of performance testing is to find potential performance bottlenecks and identify possible errors and failures, ensuring that the software meets its performance requirements before it is released. The types covered include load testing (response under many simultaneous requests), stress testing (response under extreme load conditions or resource constraints), spike testing (response to a significant and rapid increase in workload that exceeds normal expectations), soak testing (simulating a gradual increase in end users over time to test the system's long-term stability), scalability testing (how the system scales when the volume of data or users increases), capacity testing (which also tests traffic load based on the number of users but differs in scope), and volume testing (response to processing a large amount of data).
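
To make load testing a bit more concrete, here is a minimal sketch of a hand-rolled load test in Java. The target URL and user count are placeholders I made up, and a real project would more likely reach for a dedicated tool such as JMeter or Gatling rather than code like this.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal load-test sketch: fire N concurrent requests and report latency.
public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;                          // simulated simultaneous users
        String target = "https://example.com/api/health";  // placeholder URL

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);

        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < concurrentUsers; i++) {
            results.add(pool.submit(() -> {
                long start = System.nanoTime();
                HttpRequest request = HttpRequest.newBuilder(URI.create(target)).GET().build();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                return (System.nanoTime() - start) / 1_000_000; // latency in milliseconds
            }));
        }

        long total = 0;
        long worst = 0;
        for (Future<Long> result : results) {
            long ms = result.get();
            total += ms;
            worst = Math.max(worst, ms);
        }
        pool.shutdown();

        System.out.printf("Average latency: %d ms, worst: %d ms under %d concurrent users%n",
                total / concurrentUsers, worst, concurrentUsers);
    }
}
```

Numbers like the average and worst-case latency map onto the load and spike categories above; a soak test would run the same loop for hours instead of seconds.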

The next section discusses the role of performance testing. I found some of the bullet points to be a bit obvious like “improving the system’s overall functioning”, “monitoring stability and performance”, and “assessing system scalability”. The other points mentioned were “failure recovery testing”, ”architectural impact assessment”, “resource usage assessment” and “code monitoring and profiling”.

After discussing the role of performance testing, the author discussed the common issues revealed by performance testing. The problems discussed were speed problems, poor scalability, software configuration problems, and insufficient hardware resources. This section is broken down into many bullet points to explain each issue. I found it to be very beneficial in simplifying the ideas presented.

The last three sections cover the steps for performance testing, best practices, and common mistakes to avoid. The steps presented in this article are: 1. Defining the Test Environment, 2. Determination of Performance Indicators, 3. Planning and Designing Performance Tests, 4. Setting Up the Test Environment, 5. Development of Test Scenarios, 6. Conducting Performance Tests, and 7. Analyze, Report and Retest. The section after the steps includes some best practices like “Early Testing And Regular Inspections” and “Testing Of Individual Blocks Or Modules”. Some of the common mistakes to avoid that I found noteworthy were “Absence of Quality Control System” and “Lack of a Troubleshooting Plan”. While working on the Hack.Diversity Tech.Dive project, I had to do some performance testing and even used a tool mentioned in the article called Postman. In the future, I will definitely be more prepared to do performance testing now that I can reference this article.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

security testing

For this week’s Software Testing blog post, I wanted to have a glance into the world of security testing.

In a big-picture sense, security testing is a simple concept to understand, even from a layman's perspective. It entails testing software for any vulnerabilities, risks, or threats that could negatively impact the software's stakeholders. Stakeholders can include the company that creates and/or deploys the software, the end user, and even the software itself.

There are many different types of and methods for security testing, and in this blog post I would like to go through a couple that caught my attention, with information courtesy of Oliver Maradov's very thorough security testing blog post on BrightSec.

Before getting into the actual methods, I found some key principles at the start of the article noteworthy. Confidentiality and authorization have to do with limiting access to sensitive data, with authorization only allowing access on the basis of permission. Authentication is the principle of verifying the identities that access the data. Integrity and availability are crucial to preserving the consistency of data and making sure it is accessible when needed, without failure. Lastly, non-repudiation relies on logging so that actions taken on the data can be traced and cannot later be denied.
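
To make the integrity principle a little more concrete, here is a minimal Java sketch, with a made-up payload, showing how comparing SHA-256 digests can reveal that data was altered. This is only an illustration of the idea, not something taken from Maradov's post.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Minimal sketch of the integrity principle: detect whether data was altered
// in transit or storage by comparing SHA-256 digests.
public class IntegrityCheckExample {

    static String sha256(String data) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(data.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(hash);
    }

    public static void main(String[] args) throws Exception {
        String original = "transfer=100.00;to=account-42";   // made-up payload
        String expectedDigest = sha256(original);             // stored or sent alongside the data

        String received = "transfer=900.00;to=account-42";    // tampered copy
        boolean intact = sha256(received).equals(expectedDigest);

        System.out.println("Data intact? " + intact);          // prints: Data intact? false
    }
}
```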

Now, for the first subject in the article that I found interesting. What first caught my eye was the use of the different ‘box’ testing methodologies, that is, black box, white box and grey box testing. The reason I was interested is because black box testing makes a lot of sense in a security context (compared to its standard software development application, in my opinion). Essentially, where black box testing for software developers mostly deals with the specifications prior to writing the code for the system, black box testing in a security context means approaching software that has already been written from the perspective of an attacker. We typically see this through ethical hacking and penetration testing. I just found this a lot more compelling than the software engineering side of black box testing.
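
As a rough illustration of thinking like an attacker, here is a hedged sketch of a black-box-style test that feeds hostile input to a hypothetical input validator. The SearchInputValidator class and its rules are invented for this example and do not come from the article.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical validator under test, shown inline so the sketch is self-contained.
class SearchInputValidator {
    boolean isSafe(String input) {
        if (input == null || input.length() > 100) {
            return false;
        }
        // Reject characters commonly used in SQL injection attempts.
        return !input.matches(".*['\";=].*") && !input.toLowerCase().contains("--");
    }
}

class SearchInputValidatorTest {

    private final SearchInputValidator validator = new SearchInputValidator();

    @Test
    void ordinarySearchTermIsAccepted() {
        assertTrue(validator.isSafe("security testing"));
    }

    @Test
    void sqlInjectionStyleInputIsRejected() {
        assertFalse(validator.isSafe("' OR '1'='1"));
        assertFalse(validator.isSafe("name'; DROP TABLE users; --"));
    }

    @Test
    void oversizedInputIsRejected() {
        assertFalse(validator.isSafe("a".repeat(5000)));
    }
}
```

The tests never look at how the validator works internally; they only probe it with the kind of input an attacker would try, which is the black box mindset described above.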

The second subject I found compelling was the section on DevSecOps. As the name implies, the idea of DevSecOps is to merge the software development, security, and operations processes so that the whole endeavor of creating a great piece of software is built from the ground up with good principles in a multi-faceted way, and so that each team member has some understanding of those key principles. The end product should (hopefully) then reflect a strong approach to each process.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

Potential Mistakes when Working with Test-Driven Development

Based on this article by Denis Kranjcec: How I failed at Test-Driven Development and what it took to get it right

For the past couple of weeks, Test-Driven Development has been the focus of our learning. We’ve done a small project in class creating a simple rover instruction program, which built our confidence and knowledge of how TDD works. Then, we used what we learned to individually create a unique word counter with string input. Since then, I have taken a slight liking to TDD, although it may just be a honeymoon phase. Well, my last blog post was about why Test-Driven Development is so good. This time, I sought out people’s personal failures with TDD, and why they failed. This is how I found How I failed at Test-Driven Development and what it took to get it right by Denis Kranjcec.

Kranjcec begins his article by summarizing that he took a shot with TDD, and missed. But why did he miss? He dives into the reasons, and after a quick discussion of what TDD is and how it is supposed to improve code, he comes to the conclusion that it wasn’t TDD’s fault; it was his.

To start off, Kranjcec was using DDD (Domain-Driven Design), which made his code easy to comprehend, but left his tests more complex and difficult to maintain. After trying TDD again, he eventually realized that his biggest mistake was “starting with the simplest use cases and building up to the more complex ones” (Kranjcec). This seems to be a common problem among TDD users, who tend to end up with fewer, more complicated tests instead of many simpler ones.

His solution was to change the way he wrote and designed code, breaking his work down into many smaller steps with lots of commits each day, in addition to refactoring. In short, he noticed his code became much shorter and simpler to understand, with other benefits as well.
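
As a rough illustration of what those smaller steps can look like for the word-counter exercise mentioned earlier, here is a sketch where each tiny test drives a little more behavior into existence. The WordCounter class is hypothetical and not taken from Kranjcec's article.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

// Hypothetical class grown test-by-test; each test below was written before the code it forced.
class WordCounter {
    Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        if (text == null || text.isBlank()) {
            return counts;
        }
        for (String word : text.trim().toLowerCase().split("\\s+")) {
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }
}

class WordCounterTest {

    private final WordCounter counter = new WordCounter();

    @Test // Step 1: the simplest possible case drives the initial skeleton.
    void emptyInputHasNoCounts() {
        assertEquals(Map.of(), counter.count(""));
    }

    @Test // Step 2: a single word forces basic splitting and counting.
    void singleWordIsCountedOnce() {
        assertEquals(Map.of("rover", 1), counter.count("rover"));
    }

    @Test // Step 3: repeated words force the accumulate logic.
    void repeatedWordsAreAccumulated() {
        assertEquals(Map.of("go", 2, "left", 1), counter.count("go left go"));
    }
}
```

Each test stays trivially small, which mirrors the point about many small steps and frequent commits.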

I selected this article because after only seeing the benefits of TDD, I wanted to see the struggles people had with it. Although this article wasn’t about the faults of TDD, it shows the personal mishaps or mistakes that one can make when working with TDD, which can lead to a greater understanding of it in the end. 

I did like this article because it offered an alternate perspective on how TDD can go. I learned to be patient with my code, and to try to stick with simpler tests and designs in order to make TDD more effective. I think this article is a good reminder to self-reflect on your own work, as it may not always be someone else's fault when things don't work. I hope to apply this knowledge in future projects using TDD, keeping in mind that simplicity is key.

From the blog CS@Worcester – Josh's Coding Journey by joshuafife and used with permission of the author. All other rights reserved by the author.

Use Of AI in Software Testing


The recent explosion of AI has invaded almost every industry. It has become something of a buzzword, with many companies loudly proclaiming how they are making use of the emergent technology to benefit their customer bases. ChatGPT and other types of AI have already started creating all sorts of problems within the academic setting, giving many students an easy out on writing essays. Not only that, but AI is also now being cited as one of the main driving forces behind massive layoffs within the tech industry and beyond.

All of that being said, how can AI be utilized to improve software testing? I know that trying to think of ways for AI to replace even more jobs within the software industry can be a bit jarring after bringing up the problems it has already created, but I wanted to look into how the future may look if we were to use this technology to expedite the testing process. It is entirely possible that we could teach proper testing etiquette to an AI model and have it automatically produce test cases. Future IDEs could add an auto-generated test file feature to help developers quickly create basic test cases. Well, I didn't have to speculate for long: one Google search later, I had already found a website for using AI to create test cases. This does pose a rather worrying question about the speed at which AI is developing and whether our modern society can keep up with it, but I would rather not dwell on such topics. There have also been concerns about the proliferation of AI potentially poisoning the well of data that these models rely on, and I do believe certain measures will need to be taken to prevent another event like the dot-com bubble burst from happening again today.

https://www.taskade.com/generate/programming/test-case 


Another use case that has been proposed for artificial intelligence is the generation of “synthetic data”: data created to mimic real-life data in order to test and train programs. DataCebo is one such company and has been using AI to create synthetic data. Called Synthetic Data Vaults, or SDV for short, these systems are usually sold to data scientists, health care companies, and financial companies. The purpose of creating realistic synthetic data is so that companies can train programs on a wide range of scenarios without relying on historical data, which is limited to what has already happened. This also gets around the privacy issues of companies using people’s private data unethically.
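
SDV itself is a Python toolkit, so the snippet below is not its API; it is just a minimal, made-up Java sketch of the underlying idea: fabricating plausible-looking transaction records (with a fixed seed for reproducibility) so that tests never touch real customer data.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal synthetic-data sketch: fabricate transaction records that look plausible
// without containing any real customer information.
public class SyntheticTransactions {

    record Transaction(String customerId, LocalDate date, double amount, String category) {}

    private static final String[] CATEGORIES = {"groceries", "rent", "utilities", "travel"};

    static List<Transaction> generate(int count, long seed) {
        Random random = new Random(seed); // fixed seed keeps the generated test data reproducible
        List<Transaction> transactions = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            String customerId = "CUST-" + (1000 + random.nextInt(9000));
            LocalDate date = LocalDate.of(2024, 1, 1).plusDays(random.nextInt(365));
            double amount = Math.round(random.nextDouble() * 500.0 * 100.0) / 100.0;
            String category = CATEGORIES[random.nextInt(CATEGORIES.length)];
            transactions.add(new Transaction(customerId, date, amount, category));
        }
        return transactions;
    }

    public static void main(String[] args) {
        generate(5, 42L).forEach(System.out::println);
    }
}
```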

https://news.mit.edu/2024/using-generative-ai-improve-software-testing-datacebo-0305

From the blog CS@Worcester Alejandro Professional Blog by amontesdeoca and used with permission of the author. All other rights reserved by the author.
