Category Archives: Week 9

Round Earth, Round Testing

As class has progressed this semester, we have learned several testing methods and strategies. While perusing the internet, I found another intriguing test strategy not discussed in class called the Round Earth Test Strategy. This testing strategy turns the test automation pyramid on its head by adding spheres to the triangle.

At the core of these spheres is the Earth, that classic blue sphere we all know and love. We then add concentric spheres around the Earth representing the different levels of testing. Add in some static and dynamic elements, such as data, and from this we can take a big slice and lay it out, giving us our Round Earth Test Strategy. One will notice that it does, in fact, turn the test automation pyramid on its head, as the base of the triangle is now on top. This maintains that during testing our attention must be on the surface and the user experience. It also takes our attention away from the lower part of the pyramid, which represents the lowest level of the technology, something we rarely test, or even can test for that matter. The Round Earth model lends itself to good analogies and to deep reasoning, as the analogy greases the mental wheels and gets the gears turning. The presence of data in the model reminds testers to think about data, and the model can show testing problems that occur at multiple levels. Lastly, the model reminds testers about testability.

The first thing that struck me about this model is that it keeps the user at the forefront of testing. The author reminds the reader that the worst person to test a program is the developer themselves. The “curse of expertise” puts the developer in the “underground” of the model, while the users live on the surface, a different world compared to the one the developers are in.

Another interesting thing that struck me was the model’s emphasis on data. Just recently, in class, we did some work with a half-finished program and tested it using dummy data. What strikes me is that, while testing, there was little to no concern for what the data actually meant; we only cared whether the program compiled and passed the tests. The Round Earth method puts emphasis on that data.

The last bit that struck me was that this model lends itself well to analogy, which makes it easier for people to understand. While that isn’t strictly what makes a model good, it is always helpful when trying to explain it to your colleagues for a second opinion!

From the blog CS@Worcester – Computer Science Discovery at WSU by mesitecsblog and used with permission of the author. All other rights reserved by the author.

Journey into the top software testing technique & tools for building software

As I take another step towards Software Quality Assurance testing, I dive into the blog I found for this week’s post, “Top Software Testing Techniques & Tools For Building Working Software” by Ekaterina Novoseltseva. In this blog I learned about the top 4 testing techniques and tools for building working software. Software testing is very important for program development because it helps reduce the risk, time, and cost that come with developing new software.

Top 4 testing techniques important for building software

  1. Unit Testing
    • Is used to test each small part of the software system to make sure it is working properly and does what is expected. “The goal of unit testing is to analyze each small part of the code and test that is working correctly.” as put in the blog I am summarizing. The blog lists a few unit testing tools for this.
  2. Integration Testing
    • Is the act of combining unit tests with each other in order to test larger and larger parts of the program together. It works by integrating more than one unit test into a component, and as units keep being added, an even larger part of the program is exercised, letting us see how well the units run together and how the software will function. Another thing to note about integration testing is “that integration test is also about testing units with databases or other external third-party libraries.” as the blog puts it. The blog lists a few integration testing tools for this.
  3. Functional Testing
    • Is important for testing the quality of the software and making sure it does what it is intended to do and functions the way it should whenever the program is used by its users. “Testing is used to verify that your designed application, website, software executes its functions through a proper response to user commands, a consistent user interface, integration with other systems and business processes, and proper handling of data and searches.” as put by the blog. The blog lists several functional testing tools for this.
  4. Performance Testing
    • Is the testing process that tests how well the software performs under a certain workload, checking how well it responds and how stable it is. It also tests the quality of the software by ensuring that the system not only performs as intended but is reliable as well. The blog lists a couple of performance testing tools for this.
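
As a small illustration of the unit-testing level described above, here is a minimal sketch using Python’s built-in unittest module. The function under test is a made-up example (not from the blog); each test analyzes one small behavior in isolation, which is exactly the goal the blog describes:

```python
import unittest

# The function under test is an invented example, not from the blog.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Unit tests: each checks a single small piece of behavior."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Running the file executes all three tests; a failure in any one of them points directly at the small piece of code that broke.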

This wraps up the 4 top testing techniques and tools for building software. For more information, read the blog “Top Software Testing Techniques & Tools For Building Working Software” by Ekaterina Novoseltseva. This has been YessyMer in the World Of Computer Science, until next time. Thank you for your time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Sanity Testing

As a follow-up to my regression testing post a few weeks ago: sanity testing is almost a subset of regression testing, performed when we do not have enough time to do the full testing. It is used to check whether the bugs reported in the previous build have been fixed, and whether the fixes were introduced without breaking any previously working functionality. Sanity testing checks that the functionality works as intended/expected instead of doing the entirety of regression testing, which helps avoid the wasted time and cost involved if the build has failed.

After the fixes have been made, sanity testing kicks into full swing, checking the defect fixes and changes done to the software application without breaking the core functionality. It is usually a narrow and deep approach to testing, concentrating on a limited set of main features in detail. Sanity testing is usually non-scripted and helps identify missing dependent functionalities. It is used to determine whether sections of the application are still working after minor changes.

The objective of this testing is not to verify the new functionality thoroughly, but rather to determine that the developer has applied some rationality while producing the software. For example, if your scientific calculator gives the result 2+2 = t! then there is no point in testing advanced functionality like sin 30 + cos 50. Sanity testing is often compared to smoke testing, which is used after a software build is completed to ascertain that the critical functionalities of the program are working fine, but smoke testing is a topic for another day. All in all, the website I looked at was very insightful into what sanity testing is and what it does.
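
As a toy sketch of the idea (the calculator and the checks are invented for illustration, not from the article): a sanity check exercises just the core functionality, wide enough to catch an obviously broken build, but far shallower than a full regression run. If even 2 + 2 is wrong, there is no point testing sin 30 + cos 50:

```python
def sanity_check(ops):
    """Quickly verify a few core operations before deeper testing.

    Returns False as soon as any core check fails, so a clearly broken
    build is rejected without wasting the cost of full regression testing.
    """
    quick_checks = [
        (("add", 2, 2), 4),
        (("mul", 3, 4), 12),
    ]
    for (name, a, b), expected in quick_checks:
        if ops[name](a, b) != expected:
            return False  # build fails sanity: stop here
    return True

calculator = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
broken = {"add": lambda a, b: "t!", "mul": lambda a, b: a * b}

print(sanity_check(calculator))  # → True
print(sanity_check(broken))      # → False
```

The broken build is caught by the first shallow check, mirroring the 2+2 = t! example above.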

https://www.guru99.com/smoke-sanity-testing.html

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Let’s Put It to Rest: What is REST?

We’re preparing for a month-long final project in our Software Design, Construction, and Architecture class, which involves a static web app that utilizes a REST API. We’ve been going over what a REST API entails in some of our lectures, but I also wanted to read more about it so we are more ready for our project to begin. The tutorial that I used can be found at https://restfulapi.net/.

REST stands for REpresentational State Transfer, and it essentially defines a set of guidelines, or constraints, that a system must follow in order to denote a “scalable, fault-tolerant, and easily extendable system” (from http://restcookbook.com/Miscellaneous/rest-and-http/). So, a REST API can successfully handle client-server requests and subsequent server responses by following the 6 guidelines that are part of a RESTful system.

  1. Client-Server – This is associated with the separation between client interface operations and server storage operations without them knowing about each other. This improves scalability of the system and flexibility of the client interface across different platforms.
  2. Stateless – This means that the client and the server can still successfully communicate without taking advantage of any context outside of each request. In other words, both the client and the server do not need any prior requests or responses to still properly complete communications.
  3. Cache-able – Data in a response from the server can be labelled as cache-able or non-cache-able; when cache-able, data from the response can be reused in other requests.
  4. Uniform Interface – This simplifies the architecture of the system by separating its services into independent components: identification of resources (which requires each request to specify which resources, or data, it needs; this is done through the endpoint of a URL), resource manipulation through representation (which ensures that enough information is provided in the request to modify the resources presented), self-descriptive messages (which gives a message to the server that describes how to process the request), and hypermedia as the engine of application state (which provides information about navigating to each resource provided in each response from the server; for example, a link to the localhost may be displayed when returning a response about a local resource).
  5. Layered System – This presents various services of the system in a hierarchical manner so that each component in distinct layers can only see the layers they immediately interact with.
  6. Optional Code on Demand – This optional constraint allows the server to extend functionality of the client by responding with executable code to be run by the client, such as a JavaScript file or Flash application.
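
To tie a couple of these constraints together, here is a toy, in-memory sketch (the users resource and its data are invented for illustration, not part of any real API): each request is handled using only the information it carries (stateless), and resources are identified purely by URI and HTTP method (uniform interface):

```python
# Toy in-memory resource store; the resource and data are invented.
users = {1: {"id": 1, "name": "Ada"}}

def handle(method, path):
    """Dispatch a request using only what the request itself carries
    (stateless), with resources identified by URI (uniform interface)."""
    parts = path.strip("/").split("/")
    if parts[0] != "users":
        return (404, None)                     # unknown resource
    if method == "GET" and len(parts) == 1:
        return (200, list(users.values()))     # collection resource
    if method == "GET" and len(parts) == 2:
        user = users.get(int(parts[1]))        # item resource
        return (200, user) if user else (404, None)
    return (405, None)                         # method not allowed

print(handle("GET", "/users/1"))  # → (200, {'id': 1, 'name': 'Ada'})
```

No session or prior request is needed to interpret `/users/1`; every call stands alone, which is what the stateless constraint asks for.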

This article definitely helped me understand more about how the communication between a local client and a remote server works. I’m very excited for our final project to begin, especially now that I know more about the process behind it!

From the blog CS@Worcester – Hi, I'm Kat. by Kat Law and used with permission of the author. All other rights reserved by the author.

The Testing (Automation) Pyramid

The Testing Automation Pyramid (sometimes just called the “Testing Pyramid”) is an Agile methodology that involves a heavy reliance on Unit Tests as opposed to UI-based testing. It’s referred to as a “pyramid” because the width of the layers in the pyramid refers to the ideal number of tests per layer. The bottom, widest layer is Unit Testing, the middle layer consists of service-related tests, and the tip of the pyramid represents UI Tests. A picture of the structure is shown below, sourced from Martin Fowler’s blog.

[Image: the test pyramid, from Martin Fowler’s blog]
Moving up the pyramid, the type of testing done costs more resources to execute and more time to assess.

Part of the reason why UI testing takes significantly more resources is that it involves interacting with and logging the operations of the interface in order to see where something may go wrong. Running through the operations of the UI to see where it breaks takes a very long time, as there are so many levels of abstraction between the UI and the underlying infrastructure, and so much functionality within user interfaces. On top of this, as the product is updated and refactored, tests derived from the original UI become obsolete, making the lifespan of higher-level tests shorter than those aimed at the low-level structure of the system.

This is why Unit Tests are so good. They’re a cheaper (in terms of both resources and time) alternative, and they provide a means of testing the actual root of the issue. Unit Tests are preventative and they ensure that a bug presenting itself in the UI level stays fixed, rather than allowing the bug to manifest itself in a variety of different ways.
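
As a sketch of that preventative idea (the shopping-cart function and its bug are invented for illustration): a defect that first showed up in the UI, say a negative total after an oversized discount, gets pinned by a unit test at the level where the bug actually lives, so the fix can never silently regress:

```python
def cart_total(prices, discount):
    """Total after a discount. An earlier (hypothetical) version returned
    sum(prices) - discount directly, which surfaced in the UI as a
    negative total; the max() clamp is the fix."""
    return max(0.0, sum(prices) - discount)

def test_total_never_negative():
    # Pins the fix: an oversized discount must clamp to zero.
    assert cart_total([5.0], 10.0) == 0.0

def test_normal_discount():
    assert cart_total([5.0, 5.0], 3.0) == 7.0

test_total_never_negative()
test_normal_discount()
print("regression tests passed")
```

The unit test targets the root of the issue directly, instead of re-driving the whole interface to observe the symptom.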

There is quite a bit of criticism of this model, as if it goes unchecked it ends up as the so-called “Ice Cream Cone” anti-pattern, where a small layer of exploratory manual testing above the UI testing layer ends up becoming mandatory manual testing, which becomes both a resource and a time sink. This is far better explained in Alister Scott’s post over at his blog, WatirMelon.

Martin Fowler, whose blog I found this concept on and referenced earlier, summed it all up in a pretty wonderful way: “If you get a failure in a high level test, not just do you have a bug in your functional code, you also have a missing or incorrect unit test.” Martin’s blog post elaborates on the idea much better than I’m able to, and his blog is extremely readable and has really great content. I’ll definitely be using it as a resource in the future.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Summary of an Agile Frameworks Post

http://thetesteye.com/blog/?utm_source=fuel&utm_medium=referral&utm_campaign=lrfuel

This week I read the introductory blog post from The Test Eye, written by Martin Jansson in 2017 as a description of the author’s goals and intentions as he writes further posts. In the post linked above, Jansson describes the pros and cons of agile frameworks, in his opinion as a professional tester.

Firstly, Jansson states that a main advantage in an agile organization is that no single person bears the blame for the overall quality of the final product if it happens to fail or come short of expectations. The result comes from everybody on the team contributing their own effort to the project, which is a very good concept for a team-building environment. However, Jansson believes there can still be improvements within the framework.

Jansson expresses that these frameworks are meant to be altered and changed dynamically to fit each specific situation appropriately. If the agile transformation is followed exactly, there will be circumstances where it fails, since frameworks have limitations, so it’s important to be flexible in that kind of environment.

He continues by explaining that coaching in an agile organization also proves fundamentally difficult, since no one person can be an expert in all necessary fields. Rather, it’s more efficient for this expertise to come from within each department to guide implementation. This approach is supported by the fundamental concepts of an agile framework, where change and improvement over time are stressed.

Jansson then critically assesses the inclusion of testing in frameworks such as SAFe. He believes they could improve by including more varied testing material, since currently there isn’t much content about testing at all. He also shares that testing is currently undervalued and is misrepresented as something “everyone is expected to work with.”

Although he has these criticisms, Jansson still admires the many good ideas and material laced throughout the frameworks. His goal is to suggest improvements and adjust them as agile transformation continues into the future. I will continue to follow these posts, since the writing style is interesting and informative, and I suggest The Test Eye to anybody who hasn’t heard of it already.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

Choices

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post talks about the choices, or options, that software designers give users while they are using programs to accomplish their tasks. Pull up the Tools | Options dialog box and you will see a history of arguments that the software designers had about the design of the product. Some programs ask users to make many choices, even though most users don’t understand them or find them unnecessary; as a result, users are distracted or confused.

Asking the user to make a decision isn’t in itself a bad thing. Freedom of choice can be wonderful. People love to order espresso-based beverages at Starbucks because they get to make so many choices. The problem is asking users to make a choice that they don’t care about. Actually, users care about a lot fewer things than a software designer might think. They are using your software to accomplish a task; they care about the task and really want to accomplish it. If it’s a graphics program, they probably want to be able to control every pixel to the finest level of detail. If it’s a tool to build a web site, you can bet that they are obsessive about getting the web site to look exactly the way they want it to look. They do not, however, care one whit whether the program’s own toolbar is on the top or the bottom of the window, or how the help file is indexed. They don’t care about a lot of things, and it is the designers’ responsibility to make these choices for them so that they don’t have to.

Sometimes it is very hard to choose between conflicting requirements, and the designer can’t think hard enough to decide which option is really better. When designers try to abdicate that responsibility by forcing the user to decide, they’re probably not doing their job. Someone else will make an easier program that accomplishes the same task with fewer intrusions, and most users will love it.

A major rule of user interface design: “Every time you provide an option, you’re asking the user to make a decision.” That means users will have to think about something and decide about it. Giving users options is not necessarily a bad thing, but, in general, you should always try to minimize the number of decisions that users have to make. This doesn’t mean eliminating all choice. There are enough choices that users will have to make anyway: the way their document will look, the way their web site will behave, or anything else that is integral to the work the user is doing and really cares about.

Article: https://www.joelonsoftware.com/2000/04/12/choices/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

How to design a REST API

While looking for information about REST APIs (Representational State Transfer), I came across the blog “How to design a REST API”. I found this blog to be useful and simple, and it includes examples. The blog and its Quick Reference Card are divided into 4 main sections.

The first section covers general concepts: one of the main goals of an API should be to reach as many developers as possible, which means the API needs to be self-describing and simple. On security, it explains how important OAuth is for managing authorization, as used by many companies; it uses a token system, without exposing the real credentials. API domain names are also important and need to be clear and straightforward.

The second section is about URIs. A RESTful API takes a different approach, using concrete names and not action verbs. The API needs case consistency and plurals to make the structure clear, and versioning should be mandatory in the URL. CRUD-like operations explain how each HTTP verb (GET, POST, PUT, DELETE) is used as a CRUD action (READ, CREATE, UPDATE/CREATE, DELETE), with examples.
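
To make the verb-to-CRUD mapping concrete, here is a small sketch (the resource names and URIs are hypothetical; the mapping itself follows the section above):

```python
# HTTP verb → CRUD action, as described in the URIs section.
VERB_TO_CRUD = {
    "GET": "READ",
    "POST": "CREATE",
    "PUT": "UPDATE/CREATE",
    "DELETE": "DELETE",
}

def describe(method, uri):
    """Describe a request: note the versioned URL and the plural,
    concrete noun ('users'), with no action verb in the URI itself."""
    return f"{VERB_TO_CRUD[method]} on {uri}"

print(describe("GET", "/v1/users/42"))  # → READ on /v1/users/42
print(describe("POST", "/v1/users"))    # → CREATE on /v1/users
```

The action lives entirely in the HTTP verb, so the URI stays a stable name for the resource.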

The third section is about query strings, which combine with the API design, through filters, pagination, partial responses, and sorting, to give queries better results and to optimize bandwidth. We can use a range query parameter, and the API response should contain Link, Content-Range, and Accept-Range headers.
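
As a rough sketch of the range-based pagination idea (the parameter format and header string are simplified assumptions for illustration, not the blog’s exact specification):

```python
def paginate(items, range_param):
    """Apply a 'start-end' style range parameter (inclusive indices),
    limiting the response size to save bandwidth."""
    start, end = (int(n) for n in range_param.split("-"))
    page = items[start:end + 1]
    # A Content-Range-style value tells the client what slice it received
    # out of how many total items.
    content_range = f"{start}-{start + len(page) - 1}/{len(items)}"
    return page, content_range

items = list(range(100))
page, header = paginate(items, "0-9")
print(len(page), header)  # → 10 0-9/100
```

The client can then follow a Link header to request the next range (e.g. 10-19) without the server holding any paging state.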

Finally, the status codes, which every developer will come across. We need to return errors for common cases in a way everybody understands, and the standard HTTP return codes are highly recommended for this. The first case is success, which means the API accepted the request. The second is a client error, when the client makes a request but the API can’t accept it, for reasons such as 401 Unauthorized or 405 Method Not Allowed. There are also server errors, where the request is correct but a problem occurred on the server; the system returns Status 500 Internal Server Error. This return is somewhat common, and the client cannot do anything about it.
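
A minimal sketch of those three outcome families (the function, its arguments, and the token check are all invented for illustration):

```python
def handle_request(method, token, operation):
    """Map the common outcomes described above to standard status codes."""
    if token != "valid-token":
        return 401, "Unauthorized"           # client error: bad credentials
    if method not in ("GET", "POST", "PUT", "DELETE"):
        return 405, "Method Not Allowed"     # client error: unsupported verb
    try:
        return 200, operation()              # success: request accepted
    except Exception:
        # Server error: the request was fine, but something broke on our
        # side; the client can't fix this.
        return 500, "Internal Server Error"

print(handle_request("GET", "valid-token", lambda: "ok"))  # → (200, 'ok')
print(handle_request("GET", "bad-token", lambda: "ok"))    # → (401, 'Unauthorized')
```

The key point is that the code alone tells the client whose problem it is: 4xx means fix the request, 5xx means there is nothing the client can do.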

I like this summary of practices; it covers a large number of good, detailed references. The Quick Reference Card is clear and always has good examples next to each point. I find that much easier to understand, since they use examples from popular sites such as Google, Facebook, and Twitter. Everyone is familiar with those companies, and that makes the examples more reliable.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

Testing & Agile

For this week’s blog post, I decided to find a post on a topic that we haven’t explicitly covered in class. I found an article on Agile testing, and my interest in Agile as a whole drew me in. The article, on QASymphony, breaks down what the Agile methodology is, gives some examples of Agile testing, and explains how to align testing with the Agile delivery process. For the sake of this post, I will assume you’re familiar with the general concept of the Agile methodology for software development, and hopefully with the general concepts of Scrum and Kanban.

One of the interesting testing strategies for Agile development is Behavior Driven Development (BDD) testing. These tests are similar to, and essentially a replacement for, the Test Driven Development (TDD) style of testing in traditional waterfall development cycles. Instead of writing unit tests before code is written, BDD tests operate at a much higher level; this is how user stories are written. The development of the code is based on end-user behavior, and the tests need to be readable by those who might not be particularly technical, as they can often replace requirements documentation. This saves time in the long run, as there is no duplication of the process for those who might not be able to read tests in the traditional TDD style. The best part about BDD testing is that the tests do not necessarily need to be written by technical team members. They can be, and often are, written with input from business partners, scrum masters, and product owners who might not be able to contribute when it comes to writing unit tests in the TDD format. This style also allows for testing small snippets of functionality, like TDD, so that one somewhat-Agile aspect of TDD remains intact in BDD testing.
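
BDD frameworks such as Cucumber express these tests in a Given/When/Then format so non-technical team members can read them. As a rough plain-Python sketch of that style (the login scenario and all names are invented for illustration, not a real framework’s API):

```python
# Scenario: a registered user logs in with the correct password.
# Each step reads as end-user behavior, not as a unit of code.

def given_a_registered_user():
    # Invented account store standing in for real test fixtures.
    return {"alice": "s3cret"}

def when_the_user_logs_in(accounts, name, password):
    return accounts.get(name) == password

def then_access_is_granted(result):
    assert result is True

accounts = given_a_registered_user()
result = when_the_user_logs_in(accounts, "alice", "s3cret")
then_access_is_granted(result)
print("scenario passed")
```

Because the steps are named after behavior, a product owner can review the scenario and suggest new ones without reading any implementation detail.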

After reading through the other testing strategies and how to align testing with the Agile methodology, I realized that I had already worked in systems that operate like this at my summer internships. We operated our testing in a format pretty much identical to BDD, where everyone on the team contributed to the different testing obstacles and even came up with test cases that the code written afterwards would need to pass. And as the article says, it was common for not all of the tests to pass immediately; this is part of the concept of “failing fast” that Agile is all about. I am glad I found this article to give me more insight into how businesses around the world, including the ones I interned at, are converting to Agile and utilizing these testing strategies if they haven’t already.

Link to original article: https://www.qasymphony.com/blog/agile-methodology-guide-agile-testing/

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Journey into Software Architecture and Its Benefits

As I take another step in my journey in software C.D.A., I dive into software architecture and its importance and benefits. For this week’s blog I found a post named “15 BENEFITS OF SOFTWARE ARCHITECTURE” by Ekaterina Novoseltseva. This blog talks about the benefits of software architecture; I will sum up its main points and what I’ve learned from it.

What is something new I learned from this blog?

From reading the blog “15 Benefits of Software Architecture” I have learned the 15 reasons why having a software architecture is important. I have also learned that it is not only important, it can be considered the foundation for a strong software program.

What is Software Architecture? 

The idea of software architecture is that it is the blueprint for building a software program. The software architecture is usually the step where we decide what design will be implemented in the software and what each team member will implement in the program. “Architecture is an artifact for early analysis to make sure that a design approach will yield an acceptable system. Software architecture dictates technical standards, including software coding standards, tools, and platforms.” as stated in Ekaterina Novoseltseva’s blog.

The following are the 15 Benefits of Software Architecture:

  1. “Solid foundation” – when creating a program or project, having a solid foundation is part of the software architecture.
  2. “Makes your platform scalable.”
  3. “Increase performance of the platform.”
  4. “Reduces cost and avoids code duplicity.”
  5. “Implement a vision” – the software architecture provides the big picture.
  6. “Cost saving” – helps point out areas where money can be saved within a project.
  7. “Code maintainability” – helps programmers maintain the code within a project.
  8. “Enable quicker changes” – able to change the program at a fast pace as what the program must do changes.
  9. “Increase quality of the platform” – making the software quality incredibly better.
  10. “Helps manage complexity.”
  11. “Makes the platform faster.”
  12. “Higher adaptability.”
  13. “Risk management” – helps reduce the risk or chance of failure.
  14. “Reduces development time.”
  15. “Prioritize conflicting goals.”

The list above shows the benefits of software architecture, and through them we can see its importance. Reading this blog has made me realize that software architecture is a very important step when developing a software system/program. This has been YessyMer in the World Of Computer Science, until next time. Thank you for your time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.