Category Archives: Week 5

Flaky Tests

Flaky tests are tests that can pass or fail on the same code. This is a problem because a flaky test’s failure doesn’t always indicate a problem with the code, but you can’t just ignore the test, because you could be ignoring a real bug.

In the blog post “Where do our flaky tests come from?”, Jeff Listfield, a Senior Software Engineer at Google, discusses the potential causes of flaky tests and what can be done to avoid creating them. He demonstrates a correlation between the objective size of a test (binary size, memory usage) and the likelihood that it will be flaky. He also shows a correlation between certain tools and a higher rate of flaky tests; however, this is because larger tests are more commonly written using those tools. The tools themselves contribute only a small amount to the likelihood of a flaky test being created. When writing tests, you should think about the code you are testing and what a minimal test would look like, in order to minimize the likelihood of creating a flaky test.
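
One classic source of flakiness is timing-based synchronization: a test that sleeps for a fixed delay and hopes background work has finished. Here is a minimal sketch (hypothetical code, not from Listfield’s post) of the flaky pattern and a deterministic rewrite:

```java
public class AsyncResultTest {

    // Flaky: assumes the background work finishes within a fixed delay,
    // so the test fails whenever the machine is slow or busy.
    public void flakyVersion() throws InterruptedException {
        StringBuilder result = new StringBuilder();
        Thread worker = new Thread(() -> result.append("done"));
        worker.start();
        Thread.sleep(10); // hoping 10 ms is enough -- sometimes it is not
        assert result.toString().equals("done");
    }

    // Deterministic: wait for the event itself instead of guessing a delay.
    public void stableVersion() throws InterruptedException {
        StringBuilder result = new StringBuilder();
        Thread worker = new Thread(() -> result.append("done"));
        worker.start();
        worker.join(); // blocks until the worker has actually finished
        assert result.toString().equals("done");
    }
}
```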

I chose this topic because effective use of test cases is very important in software development: it lets you make sure that you’ve addressed and tested all of the product requirements, allows future testers to run your test cases when needed, and makes it possible to build automated scripts to run as many tests as possible. By writing your test cases out, you also won’t need to repeat the process and remember which values you’re testing each time, since the cases will already contain all the necessary variables, allowing you to maintain consistency in your tests.

This blog post in particular was interesting because it uses data gathered from real tests in Google’s continuous integration system to show a cause of flaky tests and how to avoid them. Before reading this blog I didn’t realize how important writing test cases really was, but seeing just how many automated tests Google uses (4.2 million!) led me to do more research on their importance. It also reminded me that the best solution is usually the simplest: keep your test cases as simple as possible to avoid creating a flaky test and giving yourself a headache in the future trying to figure out what’s wrong.

Source: https://testing.googleblog.com/2017/04/where-do-our-flaky-tests-come-from.html

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Le Blog Spot 2017-10-16 19:24:48

Week 5

AB Testing – Episode 68 by Brent Jensen and Alan Page.

In this episode, Brent and Alan begin by talking about their stressful lives as software QA engineers, working late hours and trying to stay up to date with the latest trends in the industry. From this brief intro I was able to get a look at the potential challenges that exist in the field of software testing. One needs the ability to learn and implement new technologies regardless of how qualified or advanced one is at software QA and testing. As the podcast continues, Alan talks about his position and how much work goes into generating and creating substantial testing procedures that get the job done and optimize the testing process. As a student majoring in computer science with a focus in software development, it’s easy for me to understand what Alan is talking about. From my class studies, I understand the importance of software testing and how a bad testing process can be of little to no value to a product, while a good testing procedure can make or break a product. He emphasized all the intangibles that only a person in the industry could understand and enumerated how useless his position appears to be to the average person. Brent expands on this topic by saying that organizations generally see the QA department as a cost without realizing its true benefits. QA is often not given credit when things are going well and consistently, but it is often the first department in the software industry to deal with layoffs and budget cuts should there be a need for them.

With this topic, both Alan and Brent went off on a tangent and began comparing and contrasting traditional testing and modern testing. Prior to this podcast, I didn’t know there were a “traditional testing era” and a “modern testing era”. According to them, the old way of testing followed this sequence: Requirement => Design => Code & Build => Testing => Maintenance, while the modern way of testing follows the sequence below.

Requirement => Testing => Design => Testing => Build & Execution => Testing => Test => Testing => Installation => Testing => Maintenance => Testing.

As we can see, the modern way implements more testing stages, which might appear more costly than the old way, but over time it saves more and enables modularization of components. The software gets built in parts instead of as one giant project.

LINK 

https://testingpodcast.com/category/ab-testing/

From the blog CS@Worcester – Le Blog Spot by houtyr and used with permission of the author. All other rights reserved by the author.

Blog 1

Software quality assurance & testing.

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Not just another CS blog 2017-10-16 16:57:49

Second blog

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Software Frameworks

This week I picked software frameworks, since it’s going to be a topic discussed in class in the future. I’d rather prepare myself and get more understanding ahead of time.

A software framework is a concrete or conceptual platform where common code with generic functionality can be selectively specialized or overridden by developers or users. Frameworks take the form of libraries, where a well-defined application program interface (API) is reusable anywhere within the software under development.

Here are some types of software frameworks:

  • Resource Description Framework, a set of rules from the World Wide Web Consortium for how to describe any Internet resource, such as a Web site and its content.
  • Internet Business Framework, a group of programs that form the technological basis for the mySAP product from SAP, the German company that markets an enterprise resource management line of products.
  • Sender Policy Framework, a defined approach and programming for making e-mail more secure.
  • Zachman framework, a logical structure intended to provide a comprehensive representation of an information technology enterprise that is independent of the tools and methods used in any particular IT business.

Using a framework is not really any different from classic OOP programming.

When you write projects in a similar environment, you will probably see yourself writing a framework (or a set of tools) over and over again.

A framework is really just code reuse – instead of you writing the logic for managing a common task, someone else (or you) has written it already for you to use in your project.

A well-designed framework will keep you focused on your task, rather than spending time solving problems that have been solved already.

Frameworks of all kinds are extremely important nowadays because of the time factor. When building something, you will need to invest a lot of your time in building the logic for your application – and you don’t want to be forced to program low-level functionality as well. Software frameworks do that: they take care of the low-level stuff for you.
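
As a rough illustration (a hypothetical sketch with invented names, not tied to any real framework), the key idea is inversion of control: the framework owns the low-level loop and error handling, and calls back into the task-specific code you write:

```java
// Minimal sketch of framework-style code reuse: the framework owns the
// low-level request loop and calls back into application code through a
// well-defined interface (the "API").

interface RequestHandler {
    String handle(String request);
}

class MiniFramework {
    // Low-level plumbing (dispatch, error handling) lives here, so every
    // project reuses it instead of rewriting it.
    void run(RequestHandler handler, String[] requests) {
        for (String request : requests) {
            try {
                System.out.println(handler.handle(request));
            } catch (Exception e) {
                System.out.println("error: " + e.getMessage());
            }
        }
    }
}

public class FrameworkDemo {
    public static void main(String[] args) {
        // The developer only writes the task-specific logic.
        new MiniFramework().run(req -> "Hello, " + req,
                                new String[] {"Alice", "Bob"});
    }
}
```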

There are also some disadvantages:

  • Creating a framework is difficult and time-consuming (i.e. expensive).
  • The learning curve for a new framework can be steep.
  • Over time, a framework can become increasingly complex.

But even with these disadvantages, I think it’s the best way to go.

From this topic I learned that a framework is a form of code reuse that is extremely important for programmers because it takes care of the low-level stuff. This will also help me develop better code at a faster pace. I really hope this helps all the students taking this CS-343 class gain a better understanding in the future.

Links or references: https://www.techopedia.com/definition/14384/software-framework ,

http://whatis.techtarget.com/definition/framework

From the blog CS@worcester – Site Title by Derek Odame and used with permission of the author. All other rights reserved by the author.

Levels of Testing

Link to blog: https://blog.testlodge.com/levels-of-testing/

Before software is released and used, it has to be tested so that there are no flaws in its specification or function. In this blog, Jake Bartlett explains the stages or “levels” of testing that are completed prior to the release and use of software. These levels are Unit Testing, Integration Testing, System Testing, and Acceptance Testing.

Unit Testing: The first level of testing is unit testing, which is the most micro-level of testing. It involves testing individual pieces of code to make sure each part, or unit, is correct. A unit is a specific piece of functionality, a program, or a certain procedure within an application. This type of testing verifies the internal design, internal logic, internal paths, and error handling.
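
As a minimal sketch (hypothetical JUnit 4-style code, with an invented add method standing in for the unit under test):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    // The "unit": one small piece of functionality tested in isolation.
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    public void addHandlesPositiveNumbers() {
        assertEquals(5, add(2, 3));
    }

    @Test
    public void addHandlesNegativeNumbers() {
        // Also exercises an internal path with negative input.
        assertEquals(-1, add(2, -3));
    }
}
```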

Integration Testing: This level of testing comes after unit testing. Integration testing tests how the units work together: individual units are combined and tested as a group. This overall process ensures that the application runs efficiently by thoroughly dissecting and analyzing how each unit of code performs with the others. The three techniques for conducting integration testing effectively are Big Bang Testing, the Top Down Approach, and the Bottom Up Approach.

Big Bang Testing involves testing the entire code base, with every group of components, simultaneously. The downside to this technique is that, since it tests everything at once, it is hard to identify the root cause of a problem when one appears.

The Top Down Approach tests the top-level units of the code first and moves down to the lower-level units in sequence.

The Bottom Up Approach tests the bottom-level units first and moves up to the higher-level units in sequence. Basically, it is the reverse of the Top Down Approach.
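
To make the bottom-up idea concrete, here is a minimal hypothetical sketch (the class names and tax rate are invented): the low-level PriceCalculator unit is verified first, then combined with the CheckoutService that depends on it and tested as a group:

```java
// Hypothetical bottom-up integration sketch: the lower unit
// (PriceCalculator) is verified first, then tested together with the
// unit that depends on it (CheckoutService).

class PriceCalculator {
    double totalWithTax(double subtotal) {
        return subtotal * 1.0625; // assumed 6.25% tax for illustration
    }
}

class CheckoutService {
    private final PriceCalculator calculator;

    CheckoutService(PriceCalculator calculator) {
        this.calculator = calculator;
    }

    String receipt(double subtotal) {
        return String.format("Total: $%.2f", calculator.totalWithTax(subtotal));
    }
}

public class IntegrationDemo {
    public static void main(String[] args) {
        // Bottom-level unit first...
        assert new PriceCalculator().totalWithTax(100.0) == 106.25;
        // ...then the combined pair, checked as a group.
        String receipt = new CheckoutService(new PriceCalculator()).receipt(100.0);
        assert receipt.equals("Total: $106.25");
        System.out.println(receipt);
    }
}
```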

System Testing: This type of testing requires the entire application. It is a series of tests that exercise the application end to end and verify the technical, functional, and business requirements of the software. This is the last level of testing before the user tests the application.

Acceptance Testing: This is the final level of testing, which determines whether or not the software is ready to be released and used. Acceptance testing should be done by the business user or end user.

I chose this blog on levels of testing because I wanted to know more about each level. I had the basic concepts of certain types of testing that were discussed in my software testing class, but terms such as system testing and acceptance testing were the ones I wanted to know more about. Bartlett highlighted the important aspects of each of the four levels of testing, which helped me understand them conceptually a lot better. Understanding these levels of testing is important because, as a future video game developer, I will have to run many types of tests to efficiently test the software I produce before releasing it. It is essential that my tests allow my applications to run successfully.

From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

SOLID principles

This week I read a blog on SOLID principles. I believe using SOLID principles in the software design process will guide me in the creation of clean and robust code.

There are many design principles out there, but at the basic level there are five, abbreviated as the SOLID principles; a short sketch of the first one follows the list.

S = Single Responsibility Principle

O = Open-Closed Principle

L = Liskov Substitution Principle

I = Interface Segregation Principle

D = Dependency Inversion Principle
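
As a minimal sketch of the Single Responsibility Principle (hypothetical classes invented for illustration), each class should have exactly one reason to change:

```java
// Violates SRP: one class mixes report content with persistence concerns,
// so it has two reasons to change.
class ReportAndSaver {
    String build() { return "quarterly numbers"; }
    void saveToDisk(String text) { /* file I/O here */ }
}

// Follows SRP: one class owns the content, another owns the storage.
class Report {
    String build() {
        return "quarterly numbers";
    }
}

class ReportSaver {
    void saveToDisk(Report report) {
        // Only this class changes if the storage mechanism changes.
        System.out.println("saving: " + report.build());
    }
}

public class SolidDemo {
    public static void main(String[] args) {
        new ReportSaver().saveToDisk(new Report());
    }
}
```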

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Software Architectural Patterns

Link to blog: https://medium.com/towards-data-science/software-architecture-patterns-98043af8028

In this blog by Anuradha Wickramarachchi, he highlights the different layers of software architecture. These include the Presentation Layer, Business Layer, Persistent Layer, and Database Layer. He also describes that each of these layers contains several “components”, such as open and closed layers. Each layer is described as follows:

Presentation Layer: The presentation layer presents and displays web pages, UI forms, and APIs that interact with end users.

Business Layer: The business layer contains the logic behind accessibility, security, and authentication procedures. This includes Enterprise Service Buses, middleware, and other request interceptors that perform validations.

Persistent Layer: The persistent layer is the presentation layer for data, which includes Data Access Objects (DAO), Object Relational Mappings (ORM), and other modes of data presentation at the application level. All of these forms of data presentation represent persistent data within RAM.

Database Layer: The database layer ranges from simple databases up to Storage Area Networks (SANs).

Components of these layers can form open and closed layers. According to Wickramarachchi, open layers allow the system to bypass layers and hit a layer below; this is done in critical systems where latency can cost a lot, and at times it is reasonable to bypass layers and seek data directly from the right layer. Closed layers, in contrast, embody the concept of Layers of Isolation, which separates each layer strictly and allows only a sequential pass through the layers, without any bypassing. Layers of Isolation enforce better decoupling of the layers, which makes the system more adaptable to change.
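
To illustrate the closed-layer idea (a minimal hypothetical sketch; the class and method names are invented), each layer calls only the layer directly beneath it, so a request passes through every layer in sequence:

```java
// Hypothetical sketch of closed layers: each layer talks only to the
// layer directly beneath it, never skipping ahead.

class DatabaseLayer {
    String fetchUser(int id) { return "user-" + id; } // stands in for a real query
}

class PersistentLayer {
    private final DatabaseLayer db = new DatabaseLayer();
    String loadUser(int id) { return db.fetchUser(id); } // e.g. a DAO
}

class BusinessLayer {
    private final PersistentLayer persistence = new PersistentLayer();
    String getUser(int id) {
        if (id <= 0) throw new IllegalArgumentException("invalid id"); // validation lives here
        return persistence.loadUser(id);
    }
}

class PresentationLayer {
    private final BusinessLayer business = new BusinessLayer();
    String renderPage(int id) { return "<h1>" + business.getUser(id) + "</h1>"; }
}

public class LayersDemo {
    public static void main(String[] args) {
        // The request passes through every layer in sequence (layers of isolation).
        System.out.println(new PresentationLayer().renderPage(7));
    }
}
```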

I chose this blog because I wanted to know more about software architectures and their layers. I knew briefly that software architectures contain multiple layers that perform a number of tasks and jobs, and that each layer differs from the others. One new thing I learned from reading this blog was the concept of Layers of Isolation; it was my first time seeing that terminology. I thought it was interesting that the four layers of software architecture contain other “components”, which Wickramarachchi explains, as well as open and closed layers.

I felt that Wickramarachchi explained things well and got straight into the concepts I wanted to understand. He highlighted the main aspects of each layer without going overboard on extra content, which helped me understand the concepts further. Since I didn’t previously have a good understanding of software architectures, this blog clarified the fundamentals I wanted to understand.

From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.