Author Archives: Winston Luu

Week 14 Post

This week’s post covers Node.js, a powerful and popular runtime that enables developers to execute JavaScript code outside the browser. By allowing JavaScript to run on the server side, Node.js has transformed the way developers approach web development, enabling full-stack JavaScript applications. The blog by Matthew Clark highlights common uses for Node.js. For example, Node.js is often used for building real-time applications, such as chat apps and collaborative tools. Additionally, Node.js is a popular choice for developing REST APIs and microservices, enabling efficient handling of API requests and data exchanges between client and server. Another common use case is single-page applications (SPAs). These applications rely on seamless interactions between the frontend and backend, and Node.js allows developers to use JavaScript across the entire stack.
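
To make this concrete, here is a minimal sketch of server-side JavaScript using only Node’s built-in http module; the port and message are arbitrary choices for illustration.

```javascript
// Minimal Node.js server -- JavaScript running outside the browser,
// using only the built-in http module.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello from the server side' }));
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```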

The V8 JavaScript engine compiles JavaScript into machine code, ensuring fast execution and making speed one of the primary advantages of Node. The speed of a platform is crucial to creating an enjoyable user experience. No one enjoys a slow program. Additionally, Node can handle large volumes of simultaneous connections efficiently. Arguably the most important advantage is scalability. Node.js is designed to scale horizontally, allowing applications to handle increasing workloads by adding additional resources without significant architectural changes. Developers can use JavaScript on both the client and server sides, reducing context switching and improving productivity. This consistency simplifies debugging and maintenance.

Node is used by major companies like Netflix and PayPal. Netflix in particular uses Node.js to improve application performance, specifically for the server-side rendering of their video streaming platform. Its lightweight nature has helped Netflix handle millions of user requests with reduced startup time and increased efficiency. PayPal made the switch to Node to unify their frontend and backend development. This shift resulted in a significant reduction in development time and improved application response times.

One drawback to this tool is its asynchronous programming model, which relies heavily on callbacks and promises. While this enhances performance, it can increase the complexity of debugging and code maintenance.

I chose this topic of research because it’s one of the most popular connection tools for frontend and backend development, and for good reason. One other tool that I came across during my research was Deno, a modern runtime for JavaScript and TypeScript, created by Ryan Dahl, the original developer of Node. Unlike Node, Deno is secure out of the box. It runs scripts in a sandboxed environment, requiring explicit permission to access files, networks, or the environment. Additionally, Deno has native TypeScript support and doesn’t require additional package managers.

Blog Post: https://dev.to/mattclark1025/why-node-js-for-web-development-in-2020-2ebc

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week-13 Post

This week’s post will cover REST APIs, Representational State Transfer Application Programming Interfaces. One of the key principles of RESTful APIs is the separation between the frontend UI the user interacts with and the backend server. Postman’s blog highlights this as, “The client’s domain concerns UI and request-gathering, while the server’s domain concerns focus on data access, workload management, and security”. The primary purpose of REST APIs is to allow different software systems to interact and exchange data over the web. REST mainly focuses on stateless communication, where each request from a client contains all the information needed for the server to process it.

REST APIs use HTTP methods and standard URL structures to enable communication between clients and servers. HTTP methods play an essential role in REST APIs. These methods correspond to CRUD (Create, Read, Update, Delete) operations in software. The POST method is used to create, while GET retrieves data from the server. PUT and PATCH are used to update existing data, with PUT replacing the entire resource and PATCH modifying specific parts. DELETE removes data. In addition, REST APIs use status codes to indicate the outcome of an operation. For example, a 200 status code indicates a successful operation, 201 signifies resource creation, 404 means a resource was not found, and 500 represents a server error. Including appropriate status codes in API responses helps clients understand the results of their requests and handle errors effectively.
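
As a rough sketch of how these methods and status codes line up in practice, here is a hypothetical /users resource written with Express; the route names and in-memory store are assumptions for illustration:

```javascript
// Hypothetical REST routes for a /users resource using Express.
const express = require('express');
const app = express();
app.use(express.json());

const users = {}; // in-memory store, for illustration only
let nextId = 1;

app.post('/users', (req, res) => {
  const id = nextId++;
  users[id] = req.body;
  res.status(201).json({ id });      // 201: resource created
});

app.get('/users/:id', (req, res) => {
  const user = users[req.params.id];
  if (!user) return res.status(404).json({ error: 'not found' }); // 404: missing
  res.status(200).json(user);        // 200: success
});

app.delete('/users/:id', (req, res) => {
  delete users[req.params.id];
  res.status(204).end();             // 204: deleted, nothing to return
});

app.listen(3000);
```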

The blog post I researched by Postman highlights how REST is widely used across various industries. For example, e-commerce platforms use REST APIs to manage product information and process orders. Social media applications utilize REST APIs to handle user profiles and posts. Cloud services often provide REST APIs to allow developers to interact with their resources programmatically. The blog also mentions another type of API called SOAP, standing for Simple Object Access Protocol. SOAP is considered a protocol, while REST is considered a set of guidelines. Unlike REST, which typically exchanges JSON over standard HTTP methods and URLs, SOAP uses XML for sending data. One of the main reasons why SOAP might be preferred over the more popular REST is that SOAP supports WS-Security, which provides a framework for securing messages, including encryption, digital signatures, and authentication. This makes SOAP more suitable for applications handling sensitive data. Corporations like banks and hospitals dealing with sensitive user information could utilize SOAP to prevent information breaches.

These APIs provide a consistent way for systems to interact and exchange data while adhering to a set of well-defined principles. By understanding HTTP methods, status codes, and data formats, developers can create APIs that users can understand and use.

Blog: https://blog.postman.com/rest-api-examples/

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Introduction Post for CS343

Hello, this is my introductory post for CS343. Feel free to check back every now and then to see what I blog about in this course.

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week-18 Post

My second post this week will cover three elements of unit testing: Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing. Each plays a crucial role in validating software behavior and functionality. Boundary Value Testing focuses on the edges of input ranges. This technique identifies defects at the boundaries of input domains, where errors are most likely to occur. By testing the minimum, maximum, and edge values, testers can catch issues that might arise from off-by-one errors or other boundary-related bugs. This method is particularly effective because boundary conditions are common sources of defects in software applications. To utilize boundary value testing, first determine the minimum and maximum values for each input field, and second create test cases that include the boundary values (e.g., minimum, maximum, just inside, and just outside the boundaries).
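
A minimal sketch of those two steps, assuming a hypothetical isValidAge validator that accepts ages 18 through 65 and Jest-style assertions:

```javascript
// Hypothetical validator: ages 18-65 inclusive are valid.
function isValidAge(age) {
  return Number.isInteger(age) && age >= 18 && age <= 65;
}

// Boundary value tests: minimum, maximum, just inside, just outside.
test('boundary values of isValidAge', () => {
  expect(isValidAge(17)).toBe(false); // just below minimum
  expect(isValidAge(18)).toBe(true);  // minimum
  expect(isValidAge(19)).toBe(true);  // just inside minimum
  expect(isValidAge(64)).toBe(true);  // just inside maximum
  expect(isValidAge(65)).toBe(true);  // maximum
  expect(isValidAge(66)).toBe(false); // just above maximum
});
```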

Equivalence Class Testing divides input data into equivalent partitions, or classes, where test cases are expected to produce similar results. Instead of testing every possible input, testers select representative values from each class, significantly reducing the number of test cases needed. This method ensures that different inputs within the same class are treated equally by the software, helping identify any inconsistencies or unexpected behaviors across various input ranges. To utilize equivalence class testing, first group input values that are treated similarly by the system into classes, and second choose one representative value from each class for testing.
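
Continuing the same hypothetical isValidAge example, the equivalence classes are “too young,” “valid age,” and “too old,” and one representative value from each class is enough:

```javascript
// Same hypothetical validator as above: ages 18-65 inclusive are valid.
const isValidAge = (age) => Number.isInteger(age) && age >= 18 && age <= 65;

// One representative per equivalence class instead of every possible age.
test('equivalence classes of isValidAge', () => {
  expect(isValidAge(5)).toBe(false);  // class: too young
  expect(isValidAge(40)).toBe(true);  // class: valid age
  expect(isValidAge(90)).toBe(false); // class: too old
});
```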

Decision Table-Based Testing involves creating a table that maps different input conditions to their corresponding actions or outputs. This technique is especially useful for testing complex business logic and decision-making processes. By systematically covering all possible combinations of inputs and their respective outcomes, decision tables help ensure that all scenarios are accounted for and validated. This method enhances the thoroughness of testing by providing a clear and structured approach to handling diverse input conditions. To utilize decision table-based testing, first list all possible conditions (inputs) and actions (outputs), and second create a table with all possible combinations of conditions and their corresponding actions.
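
As a sketch, consider a hypothetical discount rule where members get 10% off and orders over $100 get a further 5% off; the table enumerates all four condition combinations and their expected actions:

```javascript
// Hypothetical rule: members get 10% off, orders over $100 get 5% off,
// and the two discounts stack.
function discount(isMember, orderTotal) {
  let pct = 0;
  if (isMember) pct += 10;
  if (orderTotal > 100) pct += 5;
  return pct;
}

// Decision table: each row maps one combination of conditions to an action.
const table = [
  { isMember: false, orderTotal: 50,  expected: 0 },
  { isMember: false, orderTotal: 150, expected: 5 },
  { isMember: true,  orderTotal: 50,  expected: 10 },
  { isMember: true,  orderTotal: 150, expected: 15 },
];

test('decision table for discount', () => {
  for (const row of table) {
    expect(discount(row.isMember, row.orderTotal)).toBe(row.expected);
  }
});
```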

Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing are powerful techniques that enhance the effectiveness and efficiency of software testing. These testing techniques help ensure that software applications are robust, reliable, and capable of handling various input scenarios effectively. By incorporating these methods into your testing strategy, you can enhance test coverage, identify potential issues early, and deliver high-quality software that meets user expectations and business requirements.

Blog Post: https://celestialsys.com/blogs/software-testing-boundary-value-analysis-equivalence-partitioning/

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 18 Post

In this post I will cover integration testing and why we use it today. Integration testing is a critical phase in the software development lifecycle, focusing on the integration of individual components into a cohesive system. It ensures that various modules or subsystems work together as intended. One of the primary challenges in integration testing is ensuring comprehensive coverage of interactions between different components. Identifying the right integration points and scenarios to test can be complex, especially in large-scale projects with numerous dependencies.

Selecting appropriate test cases to validate integration points is crucial. It requires understanding how components interact and designing tests to simulate these interactions effectively. Failure to cover all integration scenarios may lead to undetected defects, impacting the reliability and functionality of the software.

Moreover, integration testing often involves testing across different environments and platforms, adding to the complexity. Ensuring compatibility and consistency across various configurations is essential for delivering a robust product. One of the primary hurdles is achieving comprehensive test coverage across all integration points. Prioritizing critical integration points and designing effective test scenarios are essential to address this challenge.

Another challenge is managing the dependencies and external services during integration testing. Mocking or simulating external dependencies may be necessary to isolate various parts for testing, but it can also introduce its own set of challenges, such as maintaining realistic testing environments.
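
One common way to isolate a component is sketched below, with a hypothetical order module and a stubbed payment service (Jest-style test; all names are assumptions):

```javascript
// Hypothetical order module that depends on an external payment service.
// Injecting the dependency lets the test swap in a stub.
function createOrderService(paymentClient) {
  return {
    async placeOrder(order) {
      const result = await paymentClient.charge(order.total);
      return { id: order.id, status: result.ok ? 'confirmed' : 'rejected' };
    },
  };
}

// Integration-style test with a stubbed payment client -- no real network calls.
test('order is confirmed when payment succeeds', async () => {
  const stubPayment = { charge: async () => ({ ok: true }) };
  const orders = createOrderService(stubPayment);
  const result = await orders.placeOrder({ id: 1, total: 42 });
  expect(result.status).toBe('confirmed');
});
```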

Furthermore, integration testing requires coordination among development teams working on different modules or services. Synchronizing changes and ensuring compatibility between components can be challenging, particularly in agile or distributed development environments.

Frameworks like Selenium are helpful for automating web browser interactions to test integrations between web components. For broader integration testing needs, companies might choose tools like Katalon Studio, which offers a comprehensive suite for web, mobile, desktop, and API testing. Additionally, some companies leverage enterprise-grade solutions like IBM Rational Integration Tester that provide robust features for complex integrations and compliance requirements. Ultimately, the choice of tool depends on the specific needs of the project and the company’s development environment.

Integration testing verifies the interactions between software modules, ensuring they function seamlessly as a unified system. Unlike unit testing, which examines individual components in isolation, integration testing focuses on how these components integrate and communicate with each other. It plays a crucial role in detecting issues arising from the integration of diverse elements, such as incompatible interfaces or conflicting behaviors. By identifying and addressing these issues early in the development process, integration testing helps prevent costly errors from surfacing in production. It’s a step towards delivering reliable, high-quality software that meets user expectations and business requirements.

Blog Post: https://www.opkey.com/blog/integration-testing-a-comprehensive-guide-with-best-practices and https://www.testlearning.net/en/posts/integration-testing

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 16 Post

This week’s blog post will cover System Testing and its main benefits. System Testing, as the name suggests, revolves around evaluating the entire system as a whole. It’s not just about scrutinizing individual components; it’s about ensuring that all parts seamlessly integrate and function as intended. This phase of testing comes after the completion of unit and integration testing, aiming to validate the system against its specified requirements. It involves subjecting the system to a barrage of tests to assess its compliance with functional and non-functional requirements. From testing the user interface to examining performance metrics, System Testing leaves no stone unturned in the quest for a robust and reliable software product. This method is most effective before launching your product, to ensure total coverage.

Security vulnerabilities can be a project’s nightmare. System Testing acts as a guardian, identifying security loopholes and ensuring the system is robust against potential attacks. One of the key tenets of System Testing is its focus on real-world scenarios. Instead of merely verifying technical functionalities, System Testing endeavors to simulate user interactions and workflows. By replicating typical usage scenarios, testers can unearth potential bottlenecks, usability issues, and even security vulnerabilities lurking within the system. Through testing and analysis, it offers valuable insights into the system’s readiness for deployment. Moreover, System Testing serves as a safeguard against post-release hurdles by preemptively identifying and preventing potential pitfalls.
System Testing does have its costs, however. One crucial step is creating a comprehensive test plan, which is essential for effective System Testing because it ensures all bases are covered and avoids blind spots.

Like most of the testing techniques we have covered in class, tools play a pivotal role in streamlining the testing workflow. From test automation frameworks like Selenium and Cypress to performance testing tools like JMeter and Gatling, there’s a plethora of tools available to expedite the testing process. Leveraging these tools not only enhances efficiency but also empowers testers to uncover hidden defects more effectively.
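
As an illustration of what such automation can look like, here is a Cypress-style end-to-end sketch that drives the UI of a running system (the URL, selectors, and text are hypothetical):

```javascript
// Cypress-style system test: exercises a full user flow against the
// running application. URL, selectors, and expected text are assumptions.
describe('checkout flow', () => {
  it('lets a user log in and reach their dashboard', () => {
    cy.visit('https://example.com/login');
    cy.get('#email').type('user@example.com');
    cy.get('#password').type('secret');
    cy.get('button[type=submit]').click();
    cy.contains('Welcome'); // the whole stack had to work for this to pass
  });
});
```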

System Testing stands as a cornerstone of software quality assurance, offering a panoramic view of the system’s functionality and performance. While it may pose its fair share of challenges, the insights gleaned from System Testing are invaluable in ensuring the delivery of a high-quality, robust software solution. By embracing System Testing, you’re essentially investing in the quality and reliability of your software. It’s the final hurdle before launch, guaranteeing a smooth user experience and a successful project.

Blog Post: https://blog.qasource.com/what-is-system-testing-an-ultimate-beginners-guide

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 15 Blog

This week I decided to revisit Behavior-Driven Development. I chose this topic because, as with Pairwise and Combinatorial Testing, understanding BDD before engaging in any related activities can significantly enhance your comprehension and application.

Behavior-Driven Development, or BDD, is a technique that aims to align software development with the expected behavior of the system. It focuses on making sure everyone is on the same page regarding how the software should work. Instead of just writing technical tests, BDD emphasizes describing the system’s behavior in plain language that anyone can understand. BDD promotes test automation by turning scenarios into executable specifications. Automated tests are written to verify that the system behaves as described in the scenarios. Instead of dry lists of requirements, scenarios showcase how users will interact with the app.

BDD is a collaborative approach that gets everyone speaking the same language. By focusing on how the app should behave, it reduces misunderstandings, catches bugs early on, and keeps your project on track. Additionally, by involving stakeholders in the creation of scenarios, BDD fosters ongoing communication and feedback throughout the development process. This collaborative approach helps build a sense of ownership and accountability among team members, which ultimately leads to better outcomes.

Another pro of BDD is that the scenarios and examples you create double as living documentation. This means the documentation stays up-to-date with the evolving app, reflecting the latest features and functionalities.

Of course, Behavior-Driven Development has its cons. One common challenge is writing effective scenarios that accurately capture the desired behavior of the system. It can be tough to strike the right balance between too much detail, which can lead to fragile tests, and too little detail, which can result in ambiguity. Additionally, BDD requires keeping scenarios up to date as the system evolves. We are constantly coming up with new features for our projects, and scenarios may need to be updated to reflect these changes. It’s important to note that BDD is a powerful tool, but it’s not a foolproof solution. There’s still room for miscommunication, so clear communication is key. BDD promotes clear communication, but it doesn’t eliminate the need for it altogether.

In practice, teams often use tools like Cucumber, SpecFlow, or Behave to implement BDD. These tools provide features for writing and executing scenarios, as well as generating reports to track the status of tests. By using these tools, teams can streamline the BDD process and ensure that scenarios are written and executed efficiently.
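
A small sketch of what this looks like with Cucumber.js: the Gherkin scenario (shown in the comment) lives in a .feature file, and each step maps to a JavaScript step definition. The scenario and world state here are hypothetical:

```javascript
// Feature file (Gherkin), e.g. login.feature:
//   Scenario: Successful login
//     Given a registered user "alice"
//     When she logs in with the correct password
//     Then she sees her dashboard
const { Given, When, Then } = require('@cucumber/cucumber');
const assert = require('assert');

Given('a registered user {string}', function (name) {
  this.user = { name, password: 'secret' }; // hypothetical world state
});

When('she logs in with the correct password', function () {
  this.loggedIn = this.user.password === 'secret';
});

Then('she sees her dashboard', function () {
  assert.ok(this.loggedIn);
});
```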

Blog Post: https://semaphoreci.com/community/tutorials/behavior-driven-development

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 13 Blog

This week’s blog post topic covers Pairwise and Combinatorial Testing. I chose this topic because we will soon cover it in class and having some background information prior to any activities involving this method will be useful to relate back to.

Pairwise Testing, also known as All-Pairs Testing, focuses on efficiency by testing every possible pair of input parameters, rather than every single combination. For instance, if you have a form with fields for name, email, and phone number, Pairwise Testing would cover combinations like name and email, name and phone number, and email and phone number. It’s a straightforward way to catch potential bugs without an overwhelming number of test cases. Combinatorial Testing builds on Pairwise Testing by considering combinations of three or more parameters together. Using our form example, Combinatorial Testing would include triples like name, email, and phone number. This comprehensive approach aims to uncover bugs that might only appear with specific combinations of inputs. Both methods aim to optimize efficiency and coverage. Software testing can be time-consuming, especially with numerous parameters and scenarios. Pairwise and Combinatorial Testing streamline the process, allowing you to detect more bugs in less time.
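
To see the savings concretely, here is a sketch with three boolean parameters: exhaustive testing needs 2^3 = 8 cases, but the four cases below already cover every pair of parameter values (the check at the end verifies this claim):

```javascript
// Four test cases that cover all PAIRS of values for three boolean
// parameters -- half the 8 cases exhaustive testing would need.
const pairwiseCases = [
  { a: false, b: false, c: false },
  { a: false, b: true,  c: true  },
  { a: true,  b: false, c: true  },
  { a: true,  b: true,  c: false },
];

// Verify: every pair of parameters takes on every pair of values somewhere.
const params = ['a', 'b', 'c'];
for (let i = 0; i < params.length; i++) {
  for (let j = i + 1; j < params.length; j++) {
    for (const v1 of [false, true]) {
      for (const v2 of [false, true]) {
        const covered = pairwiseCases.some(
          (t) => t[params[i]] === v1 && t[params[j]] === v2
        );
        console.assert(covered, `${params[i]}=${v1}, ${params[j]}=${v2} uncovered`);
      }
    }
  }
}
```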

This method has two key benefits. First, it reduces the number of test cases needed to achieve “good” coverage: instead of exhaustive testing, you’re strategically covering the most important combinations. Second, it helps identify interactions between parameters that might lead to unexpected behavior. By testing these combinations, you’re better prepared for real-world usage scenarios.

Of course, there are disadvantages to Pairwise and Combinatorial Testing. First, it can become tedious due to the large number of test cases required to cover all input combinations. Second, it relies on the interaction of pairs of parameters to determine outcomes, but this assumption may not always hold true, potentially missing bugs. And third, additional tests might be necessary to complement pairwise testing, adding extra time and effort to the testing process.

The main challenge when using this method is selecting the correct input parameters. The choice of relevant parameters impacts software behavior, and careful selection ensures thorough test coverage and defect detection. However, accurately determining parameter interactions is equally difficult, and getting it wrong could result in the selection of incorrect combinations.

Some of the tools used by teams are PICT, IBM FoCuS, ACTS, Hexawise, Jenny, etc. These tools help automate the test case design process by generating a compact set of parameter value choices as the desired test cases. This is done by applying the all-pairs testing technique, which involves testing all possible combinations of two parameters.

Blog Post: https://testsigma.com/blog/pairwise-testing/

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 11 Blog

This week’s blog will cover the main purpose of Object Oriented Testing and its usefulness. You most likely have heard the term “Object Oriented Programming”, which refers to a style of programming that utilizes classes, abstract classes, inheritance, polymorphism, concurrency, and more as a way of organizing code. These tools are useful when dealing with large amounts of code because the code can be broken down into multiple files, creating a more organized and readable product. Having a file with thousands of lines of code is a developer’s nightmare. Object Oriented Testing aims to test these systems and ensure they are behaving as expected. It is possible to have too much inheritance in a program, making it difficult to find where a piece of code is located and slowing down development. Unlike other test methods that primarily test function behavior, Object Oriented Testing analyzes the behavior of the entire class and its interactions with other files.

There are multiple techniques for Object Oriented Testing: Unit Testing, Integration Testing, Inheritance Testing, Polymorphism Testing, and Encapsulation Testing, just to name a few. Unit testing refers to testing individual components of the class before testing the interactions it has with other classes. Testing the class’s functions first prevents scenarios where you can’t locate a bug because there are too many inherited classes involved. Integration testing refers to testing objects of different classes and ensuring they behave properly with all the components. Inheritance testing aims to test the relationship between parent and child classes; it also verifies that overridden functions are properly implemented and actually override the parent’s behavior. Polymorphism testing aims to verify that objects of different types can be used interchangeably, ensuring consistent behavior across all types of objects. Encapsulation testing tests access control and ensures that data can only be accessed by users who are allowed to access it.
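
A small sketch of inheritance and polymorphism testing, using a hypothetical Shape hierarchy and Jest-style assertions:

```javascript
// Hypothetical class hierarchy for inheritance/polymorphism testing.
class Shape {
  area() { throw new Error('not implemented'); }
  describe() { return `area = ${this.area()}`; } // relies on the override
}

class Circle extends Shape {
  constructor(r) { super(); this.r = r; }
  area() { return Math.PI * this.r * this.r; } // overrides Shape.area
}

class Square extends Shape {
  constructor(s) { super(); this.s = s; }
  area() { return this.s * this.s; }
}

// Different subtypes are used interchangeably through the parent type,
// and the overridden method is the one actually called.
test('shapes behave polymorphically', () => {
  const shapes = [new Circle(1), new Square(2)];
  for (const shape of shapes) {
    expect(shape).toBeInstanceOf(Shape);     // inheritance holds
    expect(shape.area()).toBeGreaterThan(0); // override is invoked
  }
  expect(new Square(2).describe()).toBe('area = 4'); // inherited method uses override
});
```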

The main benefit of running these tests is detecting defects early on rather than later in the development process. For this reason, it’s recommended to run tests throughout development. Object Oriented Testing ensures our project is modular, making it easier to maintain. In addition, it becomes easier to implement new features and classes without impacting existing code. Due to the never-ending demands of modern applications and the ever-evolving tech industry, the scalability of a program is crucial.

Blogs chosen: https://medium.com/@hamzakhan522001/object-oriented-testing-1f9619da40d0
and https://www.h2kinfosys.com/blog/object-oriented-testing/

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Week 8 Blog

This week’s blog topic was chosen simply because I had zero knowledge of the term/topic. Jakub Holý’s blog on Stochastic and Property-Based Testing covers the uses of each type of testing and effective tools used to implement each. Stochastic testing refers to a testing method that uses random inputs to test the behavior of the program. Rather than using predetermined inputs for test cases, stochastic testing utilizes varied random inputs to find unexpected bugs in the program. The main purpose of stochastic testing is to uncover errors through unorthodox methods. This method offers wide testing coverage by exercising scenarios that might go unnoticed with traditional testing.

Property-based testing refers to the method of testing by describing what the software should do rather than creating distinct test cases. Initially, testers establish properties that describe what the code should always do. This method of testing utilizes tools that automatically generate test cases to exercise these properties. After generating these cases, random inputs are used to ensure the properties hold. The benefit of this testing method is that testers are only required to declare the properties the code must follow. Instead of manually creating test cases and risking the possibility of missing one, the cases are generated for them. Property-based testing enables effective test coverage. Additionally, this form of testing allows testers to focus on the abstraction and behavior of their code, allowing for a more robust product.
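
For a concrete sketch, here is what a property looks like in fast-check, a JavaScript library in the same spirit as the Haskell QuickCheck tool discussed below (Jest-style test; the property chosen is an arbitrary example):

```javascript
// Property: reversing an array twice yields the original array.
// fast-check generates the random input arrays automatically.
const fc = require('fast-check');

test('reverse is its own inverse', () => {
  fc.assert(
    fc.property(fc.array(fc.integer()), (arr) => {
      const twice = [...arr].reverse().reverse();
      expect(twice).toEqual(arr);
    })
  );
});
```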

The blog goes in-depth with various testing tools. For example, QuickCheck, a testing library created for the Haskell programming language, allows developers to write properties for their code, from which test cases are automatically generated. QuickCheck has become a very popular testing tool due to its automation, which is why it has been ported to other programming languages like Python and Scala.

The second tool that the blog mentions is Simulant for Clojure. Similar to QuickCheck, it allows developers to define properties, and it automatically generates test cases to exercise your program.

One thing that is mentioned for both testing libraries is shrinking. Shrinking refers to the method of finding the smallest input that still fails a test, allowing developers to precisely pinpoint the part of the program that is causing the failure. It is crucial that developers know their program has failed a test case, but it is even more crucial that they know why, so they can effectively fix the problem.

Blog: https://blog.jakubholy.net/2013/06/28/brief-intro-into-randomstochasticprobabilistic-testing/

From the blog CS@Worcester – Computer Science Through a Junior by Winston Luu and used with permission of the author. All other rights reserved by the author.