Category Archives: Week-14

The Value of Testing

The blog post How Do I Know My Tests Add Value? by AutomationPanda discusses the value of proper testing. The author brings up the common issues that sprout from using bug-fix counts as a metric for successful testing, and highlights why testing is important and how it can help improve development.

Testing is important because it validates that your program is working as it is expected to. Well-written tests with the proper amount of coverage give you information about your program and whether any changes need to be made. A passing test indicates correctness and that your program is working as expected. A failing test points out a bug that needs to be fixed and helps highlight weak parts of your code.
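To make that concrete, here is a minimal sketch in plain Java (the Calculator class and its checks are my own invention, not from the post): a passing check confirms expected behavior, while the division guard shows the kind of weak spot a failing test would expose.

```java
// Minimal sketch of a method under test and hand-rolled checks,
// using plain assertions rather than any particular test framework.
class Calculator {
    static int divide(int a, int b) {
        if (b == 0) throw new IllegalArgumentException("division by zero");
        return a / b;
    }
}

public class CalculatorTest {
    public static void main(String[] args) {
        // A passing check: the program works as expected.
        if (Calculator.divide(10, 2) != 5) throw new AssertionError("expected 5");
        // A failing check would point at a bug, e.g. forgetting to guard b == 0.
        try {
            Calculator.divide(1, 0);
            throw new AssertionError("expected an exception for b == 0");
        } catch (IllegalArgumentException expected) {
            // the guard works; this path is the "test passed" outcome
        }
        System.out.println("all checks passed");
    }
}
```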

The absence of testing might not necessarily lead to bugs, but we all know as programmers that even when it seems your code is running exactly as you intended, logic errors can happen and sometimes things weren’t written with every situation in mind. When testing is being implemented, developers are accountable for the code they write and must think carefully about the issues that may arise.

Tracking bugs isn’t necessarily effective because it encourages testers to find issues even when there may not be any. All tests passing is still good news and a sign of progress, even if it just means you are doing everything right. Enforcing bug quotas and forcing testers to find issues means they will expend effort looking for things that aren’t there and nitpicking small issues rather than writing more tests or making sure they have good coverage, which are far more effective at finding issues in your code.

At the end of his post, the author suggests some different metrics to use: time-to-bug discovery, coverage, and test failure proportions. All of these serve as much more accurate and effective measures of determining whether your testing has value. Whatever metric you use, it is important to think about why you are testing and whether you are doing it in a way that tells you something about your program.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.


On his blog, Mikke Goes takes the time to explain the Model-View-Controller design pattern in his post What is the Model-View-Controller (MVC) Design Pattern?. He uses the analogy of an ice cream shop to describe the different functions of the components. The waiter is the view, the manager is the controller, and the person preparing the ice cream takes the role of the model. Together, when a customer makes an order, they each can perform their responsibilities and successfully handle the customer’s request.

The Model-View-Controller design pattern separates the components of your code into sections that divide logic from interface. Keeping the functionalities separate from each other will make your application easier to modify in the future without running into issues. Each of these different groups has a different responsibility when it comes to the application and how requests are handled.

The view consists of the parts of your application that your user will see and interact with. It is not very smart, only outputting the information given to it by the controller. The view helps users make sense of the logic behind your application and interface with it.

The model is the opposite, dealing with all the logic and data manipulation going on behind the scenes of your application. The model responds to requests by processing data in the necessary ways and handing it back to the controller in a form the view can understand.

The controller handles the communication and interaction of these two. When a request is put in through the view, the controller brings this to the model, and takes the model’s output back to the view to be displayed. It is a middleman that helps connect the two other layers of responsibility.
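The three roles can be sketched in a few lines of plain Java; the ice-cream-order names below are illustrative, not from the post:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal MVC sketch: the model owns data and logic, the view only renders,
// and the controller is the middleman between them.

// Model: owns the data and the logic.
class OrderModel {
    private final List<String> orders = new ArrayList<>();
    void addOrder(String flavor) { orders.add(flavor); }
    List<String> getOrders() { return new ArrayList<>(orders); }
}

// View: not very smart; it only outputs what it is handed.
class OrderView {
    String render(List<String> orders) {
        return "Orders: " + String.join(", ", orders);
    }
}

// Controller: carries the request to the model, then the result to the view.
class OrderController {
    private final OrderModel model;
    private final OrderView view;
    OrderController(OrderModel model, OrderView view) {
        this.model = model;
        this.view = view;
    }
    String placeOrder(String flavor) {
        model.addOrder(flavor);                 // forward the request to the model
        return view.render(model.getOrders());  // hand the result to the view
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        OrderController c = new OrderController(new OrderModel(), new OrderView());
        System.out.println(c.placeOrder("vanilla")); // prints "Orders: vanilla"
    }
}
```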

The Model-View-Controller pattern seems pretty simple to comprehend. All of the components are divided by responsibility, and the program is written with this in mind, making sure that only certain components handle tasks within their category. In terms of our projects, the front end would amount to the view and the back end to the model, with the TypeScript file functioning as the controller. It seems like the development of web applications would naturally fall into this design pattern.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Object Oriented Programming

I am writing in response to the blog post titled “Your Code: OOP or POO?”

Most of the code I have ever written has been grounded in the imperative programming paradigm. I began pulling concepts from object oriented programming in the last few years to help organize large projects and keep the structure of things more adaptable. It is a useful way to develop a large framework, but relying on objects for absolutely everything seems like it would cause a lot more problems than it would ever solve.

This blog post refers to “POO” as “Programming fOr Others”, as opposed to plain Object Oriented Programming (OOP). One of the purposes of object oriented programming, and one of the reasons it is such a dogmatically adhered-to convention, is that it makes it a lot easier for people who did not write the code to understand what it is doing and to work with it themselves. The blog post discusses how object oriented programming can be overused, and how it is not the only way to write code that is readable and easily understood by others.

There was an excerpt about how there was a type of programmer who would only write five or ten lines of code, preceded by twenty lines of comments, and object oriented programming basically allows the two to be combined. Instead of concise code accompanied by an explanation, there is a large file filled with descriptive language embedded into the structure of the code itself.

Another excerpt mentioned the use of object oriented programming for trivial tasks. I do not think it makes any sense to go through the effort of supporting scalability and maintainability for something that can be started, completed, and discarded so easily without the extra work. A simple script in an imperative style would be far more practical for basic tasks.

Part of the argument made is that a programmer should focus on the principles of object oriented programming, rather than the name. Encapsulation, simplicity, code re-use and maintainability are the main ideas, not just objects.
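As a small illustration of the first of those principles, here is a sketch of encapsulation in Java (the BankAccount class is my own example, not from the post): the state is hidden, so the class controls every change to it.

```java
// Minimal sketch of encapsulation: callers cannot touch the balance
// directly, so invalid states are ruled out at the class boundary.
class BankAccount {
    private int balanceCents; // hidden state

    void deposit(int cents) {
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceCents += cents;
    }

    int getBalanceCents() { return balanceCents; }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(250);
        System.out.println(account.getBalanceCents()); // prints 250
    }
}
```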

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Using Swagger for API Documentation

As we are nearing the end of the semester and about to finish our projects, I’ve been thinking more about the documentation process for the different parts, including the frontend and backend. With this in mind, today I found a great post about documentation generation for REST APIs with a tool called Swagger. I had never heard of Swagger before today, but it is a useful framework that allows for testing, documentation, and other useful features for building APIs.

This post shows a quick tutorial of how to implement Swagger for documentation generation with a Spring Boot project that is very similar to my project and the example REST API order system we have been using. It is a relatively simple process: add the necessary dependencies for Swagger and a controller class for it, then add the necessary documentation statements for each controller and the requests within the controllers. The end result is a very nice-looking HTML page with a well-formatted layout that includes the documentation for your API backend, with a graphical display for each request and all the information associated with it, such as body and return information.

As I was reading this article about this new way of creating documentation, I was comparing it to the way we’ve been doing it so far: a simple table written in Markdown for all of our API endpoints. The output of doing it in Markdown was nice, but writing it was a tedious task because of the table formatting. I much prefer the simplicity Swagger allows when adding a new endpoint, and I also like the final product Swagger produces a lot more than the simple Markdown document.
In the future, if I am creating another Spring Boot project, I am going to try to use Swagger from the start for documentation instead of a Markdown document with a table in a basic readme, as using Swagger makes adding new endpoints much less tedious. I would also like to add it to my current project if there is enough time, and to see if it is possible to use it with Angular too.


From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

A Unique Idea for Designing Tests

Katrina Clokie, author of “A Practical Guide to Testing in DevOps“, offers us her unique insight into developing effective tests in her blog post “Generic Testing Personas“. In this article, Clokie explains how developing personas can be helpful in modeling expected behavior of users. This information is very valuable when designing different tests.

Clokie begins by explaining how good tests should cover all possible behaviors, to make sure that the software being tested is as adaptable as possible for different individuals. Developing “personas” for expected or stereotypical behavior can give an informed perspective when designing tests.

When designing personas, each individual should have clearly distinct behavior patterns and needs from one another. In this article, the author gives us an example of six personas that could be helpful when writing tests. Here I will briefly describe two that I feel complement each other and demonstrate the point nicely.

“Manager Maria” is a persona who is always busy and rushes through the software, consistently using shortcut keys, making mistakes, and constantly going AFK while using the program. For example, Maria might be frustrated with slow response times, so the tester ought to make sure the software is running smoothly even during times of high traffic.

In contrast, “Elder Elisabeth” has trouble with new technology and may require simple user interfaces, need to zoom far in, or may need to reach out for assistance. In Elisabeth’s case, the tester should make sure the program is visually stable, and can be run on older systems.

Both of these personas are stereotypes of real users who have different needs and behaviors. The more defined the characteristics of each persona, the more information about their needs can be inferred. It is the responsibility of both the developer and the Software Quality Assurance professional to make sure all of these different needs and desires are met to deliver the best possible product.

I very much enjoyed this article and I found Clokie’s perspective both interesting and helpful. I definitely enjoy and find important the application of heuristics in software design, and it makes sense that this knowledge would be helpful in the context of designing tests as well.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Parsing HTML and the Document Object Model

In A List Apart’s series “From URL to Interactive“, Blogger Travis Leithead describes in his article “Tags to DOM” how HTML documents are parsed and converted into the Document Object Model (DOM). Through the course of his post, Leithead explains encoding, pre-parsing, tokenization, and tree construction.

The first step to parsing an HTML document is encoding. This stage is where the parser must sort through the raw data sent from the server. Since all information computers handle is binary, it is the parser’s responsibility to figure out how to interpret the binary stream as readable data.

After all the data has been encoded, the next stage is pre-parsing. During this step, the parser looks to gather specific resources that might be costly to access later. For example, the parser might select an image file from a URL tagged with a “src” attribute.

Next, the parser moves on to tokenization. This process is when all the markup is split up into discrete tokens that can be organized into a discernable hierarchy. The parser reads through the text document and decides which tokens to declare based on the HTML syntax.

Once the whole document has been tokenized, each token is organized into what is called the Document Object Model, or DOM. The DOM is a standardized tree structure that defines the structure of the web page, organizing it into sections with parent-child relationships. The root of the tree is the #document node with the <html> element beneath it; its children are <head> and <body>, the body may contain many children of its own, and this organization contextualizes the data into a readable layout. The DOM also contains information about the state of its objects, which allows for the degree of interaction that makes complex programs possible.
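That parent-child structure can be sketched with a toy tree in Java; the Node class here is my own simplification, and real DOM nodes carry far more state than a tag name:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the tree the parser builds from tokens.
class Node {
    final String tag;
    final List<Node> children = new ArrayList<>();
    Node(String tag) { this.tag = tag; }
    Node append(Node child) { children.add(child); return this; }

    // Depth-first walk, the same order the tags appeared in the document.
    String outline() {
        StringBuilder sb = new StringBuilder(tag);
        for (Node c : children) sb.append(" ").append(c.outline());
        return sb.toString();
    }
}

public class DomSketch {
    public static void main(String[] args) {
        Node html = new Node("html")
                .append(new Node("head"))
                .append(new Node("body").append(new Node("p")));
        System.out.println(html.outline()); // prints "html head body p"
    }
}
```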

I would highly recommend this article and this whole series to any budding developer such as myself, as understanding the entire process of how programs transform from code to an interactive website is integral knowledge. And the authors at A List Apart do a great job being both thorough and concise in their explanations. I definitely feel more knowledgeable after having read this article and found it valuable.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Journey Into The Top 7 Web Application Testing Practices for QA Professionals

As I take another step towards software quality assurance testing, the blog post I will talk about is “Top 7 Web Application Test Practices for QA Professionals” from the TestingWhiz blog site.

This blog covers the top 7 web application testing practices for quality assurance professionals. It discusses how nowadays mobile apps are being used more than desktop browsing, which in turn requires enterprises to build more “mobile-friendly websites”; because of that, QA professionals should be testing all aspects to ensure that websites are performing efficiently. It then lists the 7 practices for skilled QA testers to accelerate web application testing, with a brief description of each. Software testers must:

  1. Concentrate on Cross-Browser Compatibility Testing
  2. Measure Application’s Performance Under Multiple Circumstances
  3. Test Each and Every Element of a Web Application
  4. Execute the Load Tests in Intervals
  5. Test Web Services Individually
  6. Pick Up the Right Factors for Usability Testing
  7. Work with an IT Professional

After listing the 7 practices for skilled QA testers to accelerate web application testing, it goes on to mention that “TestingWhiz follows a thorough QA strategy that manages the unique challenges and requirements in relation to testing web applications. So, connect with our QA professionals to enhance and perfect the complete user experience of your application with our latest testing strategies.”

I found this blog to be very interesting and useful. I like that it gives you a list of things that should be tested to ensure that the application will work on various types of platforms, whether web, mobile, or desktop, and on different devices, whether old or new. The best part is that if you do not want to deal with doing all of this testing yourself, you can contact their QA professionals, who will help you out with the testing process. This is honestly useful if I am working on an application along with other work and do not have time to concentrate on quality assurance for my code. In the meantime, however, I will do the testing myself and make sure that I follow these 7 practices for skilled QA testers to accelerate web application testing. I do not disagree with much of this blog. In fact, I find it interesting enough to recommend that others read it. The title of the blog above provides a link to it, and I suggest you check it out. Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Path or No Path!


This week’s reading is about path testing. A vital part of software engineering is to ensure that proper tests are provided, so that errors or issues that exist within a software product are resolved before they can turn into a potentially costly threat to the product. Path testing helps evaluate and verify the structural components of the product to ensure the quality meets the standards. This is done by checking every possible executable path in the software product or application. Simply put, it is another structural testing method that can be applied when the source code is available. Within this method, many techniques are available, for example control flow graphs, basis path testing, and decision-to-decision path testing. These types of testing have their fair share of advantages and disadvantages. However, path testing is considered a vital part of unit testing and will likely improve the functionality and quality of the product.

What I found thought-provoking about the content is the section on the significance of a path. Providing an understanding of what the term “path” means certainly breaks down the importance of this test: a path describes a program’s execution path, from initialization to termination. As a white-box testing technique, we can be sure that the tests cover a large portion of the source code. It is also useful that they acknowledge the problems that can be found while doing path testing. These errors are caused either by processes being out of order or by code that has yet to be refactored, for example leftover code from a previous revision or variables being initialized in places where they should not be. Utilizing path testing will reveal these error paths and greatly improve the quality of the code base. I also agree that path testing, like most white-box testing techniques, requires individuals who know the code base well enough to contribute to these types of tests. There is another downside as well: it will not catch other issues that can be found through black-box testing. This article allowed me to reinforce what I learned in class about path testing and DD-path testing.
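To illustrate why checking every executable path is so thorough, here is a small invented Java example (not from the article): two independent branches produce four execution paths, and path testing asks for a test case through each one.

```java
// Sketch of why paths multiply: two independent branches give
// four execution paths, each needing its own test case.
public class PathDemo {
    // Paths: (a<=0,b<=0), (a<=0,b>0), (a>0,b<=0), (a>0,b>0)
    static String classify(int a, int b) {
        String result = a > 0 ? "a+" : "a-";
        result += b > 0 ? " b+" : " b-";
        return result;
    }

    public static void main(String[] args) {
        // One case per path covers all four.
        System.out.println(classify(1, 1));   // prints "a+ b+"
        System.out.println(classify(1, -1));  // prints "a+ b-"
        System.out.println(classify(-1, 1));  // prints "a- b+"
        System.out.println(classify(-1, -1)); // prints "a- b-"
    }
}
```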

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Test Yourself Before You Wreck Yourself

Testing, testing. I may need your approval on this article I read by Software Testing Magazine on approval testing. Approval testing, as defined by this article, is a way of software testing that presents the before and after of an application for a user (e.g. a software development team) to review and potentially approve. It is a more visual representation of testing, and one of the major drawbacks is that the results have to be checked manually.

Some testing tools mentioned include Approval Tests, TextTest, Jest, Recheck, Automated Screenshot Diff, and Depicted (dpxdt).

The main purpose of such a tool, using TextTest as an example, is to check the text output produced by running the program from the command line in different ways.

What I found interesting is how a user can see that a test technically could have “passed” or “failed” but still decide to mark it the other way, because they choose what feature they are looking for in the end. This makes approval testing a little more flexible, as it is more of a guide for the user instead of only one word and then a short description of what could have gone wrong. I think this process is much more transparent and descriptive about what went wrong or what went right.

One way the content has changed how I will think about testing is by reminding me that there are so many more types of software and programs out there than we can imagine, which help us better code or create our own software. This one is especially good for visual coders and testers who like to see their results firsthand and compare what they are expecting with what they actually got.

Overall, I found this article was useful because it introduced me to thinking about a better way of logging the differences between what the reference result is versus the actual result. I did not disagree with any of it since it showed us how we can use approval testing to our advantage while still being honest about its limitations.


From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.

Top 5 Software Architectures

When using architecture patterns, how will you know which one to choose? Peter Wayner, an independent author for TechBeacon, takes five architectures that the majority of programs today use and breaks them down into their strengths and weaknesses. Through this, it seems like he is hoping to guide readers to selecting the most effective software architecture pattern for their needs.

If this article does not clear up enough information, Wayner also brings up a book, Software Architecture Patterns by Mark Richards, which focuses further on architectures commonly used to organize software systems.

The five types discussed in the article are:

  • Layered (n-tier) architecture
  • Event-driven architecture
  • Microkernel architecture
  • Microservices architecture
  • Space-based architecture
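As a toy illustration of the first pattern, a layered design keeps each layer talking only to the one directly below it; the class names in this sketch are my own, not from the article:

```java
// Toy sketch of the layered (n-tier) idea: presentation -> business ->
// persistence, with no layer reaching past its neighbor.
class PersistenceLayer {
    String load(int id) { return "user-" + id; } // stands in for a database
}

class BusinessLayer {
    private final PersistenceLayer db = new PersistenceLayer();
    String greet(int id) { return "Hello, " + db.load(id); }
}

class PresentationLayer {
    private final BusinessLayer logic = new BusinessLayer();
    String page(int id) { return "<p>" + logic.greet(id) + "</p>"; }
}

public class LayeredDemo {
    public static void main(String[] args) {
        System.out.println(new PresentationLayer().page(7)); // prints "<p>Hello, user-7</p>"
    }
}
```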

The one I found most interesting is space-based architecture because at first when I thought of space, I was thinking of the other kind. The one with the sun and the stars and the moon. But then I realized–what does that have anything to do with software architecture? Space-based architecture is listed as “best for high-volume data like click streams and user logs” and I think this one is pretty important, especially during times like Black Friday and Cyber Monday. I personally experienced the frustration of not being able to access a site (adidas) due to high volume and it really does not help a business.

Another architecture I found thought-provoking is microservices architecture because of the way Wayner introduced the concept: “[s]oftware can be like a baby elephant: It is cute and fun when it’s little, but once it gets big, it is difficult to steer and resistant to change.” The author’s example made me think about how a site with so many users and things happening at once is actually many separate services put together as one. I was a little surprised to think of Netflix in that way after all the times I’ve used it, but it makes much more sense now.

Overall, I found all of the information pretty useful and clear to understand as Wayner described what they were and then listed the caveats and what they were best for. I would recommend using this as a reference or quick review of common software architecture designs if someone needs it.


From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.