Category Archives: CS-443

Data Flow Testing

http://www.ques10.com/p/2192/what-is-data-flow-testing-explain-feasible-paths-1/

Finally, for this week I decided to write about Data Flow Testing. I found a blog site called Ques10 where someone had asked what data flow testing is, and a blogger named Ramnarth answered the question by explaining it. Data flow testing is the collective name for a number of tests a programmer can perform on the way data values are defined and used in a program. It can be performed at two levels, static and dynamic. Static data flow testing is performed by analyzing the source code without running it, and it is done to reveal potential problems in the program, also known as data flow anomalies. Dynamic data flow testing looks at the paths through the program: from the source code you draw a data flow graph of the program, select a critical area to test, identify the paths through that area, and then derive test inputs for those paths.

Ramnarth talks about testing critical sections of the data flow graph by breaking them down into paths and tests, much like we did in class, where we would look at the DU-paths and examine all the edges and nodes to see if they make sense. I like this blog because it involves some of the things we did in class, such as DU-path testing and edge and node testing, and I like that the things I learned in class are reflected in real-life practice. I think data flow testing is something we should be doing, because it is essentially a set of tests you run through together, so you end up covering many cases at the same time.
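To make the idea of definitions, uses, and DU-paths a little more concrete, here is a small Java method I made up (it is not from the Ques10 post). For the variable discount, a DU-path runs from a line where the variable is defined to a line where it is used, with no redefinition in between:

    // Made-up example for illustrating data flow testing.
    public class PriceCalculator {

        public static double finalPrice(double price, boolean isMember) {
            double discount = 0.0;                        // def d1 of discount
            if (isMember) {
                discount = 0.1;                           // def d2 of discount
            }
            return price - (price * discount);            // use u1 of discount
        }

        public static void main(String[] args) {
            // Two tests that cover both DU-paths: d1 -> u1 and d2 -> u1.
            System.out.println(finalPrice(100.0, false)); // expect 100.0
            System.out.println(finalPrice(100.0, true));  // expect 90.0
        }
    }

Running both calls in main exercises every DU-path for discount, which is the same kind of path coverage the blog (and our in-class DU-path exercises) describes.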

 

 

From the blog CS@Worcester – The Road of CS by Henry_Tang_blog and used with permission of the author. All other rights reserved by the author.

Best Software Engineering Practices

As my final semester approaches here at Worcester State University, I figured a relatable blog topic would be the best practices you can learn as a Software Engineer. The article can be found here: http://www.excella.com/insights/best-software-engineering-practices

 

This article touches upon three different practices that the author thought highly enough of to include: S.O.L.I.D., automated unit testing, and continuous integration.

S.O.L.I.D. – an acronym for a set of five object-oriented design principles.

S –  Single Responsibility –  a class should have only a single responsibility.

O – Open/Closed Principle – Software should be open for extension, but closed for modification.

L- Liskov substitution principle – objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.

I- Interface Segregation Principle – Many client-specific interfaces are better than one general-purpose interface.

D- Dependency inversion principle – One should depend upon abstractions, not concretions.

When a developer uses all of these principles together, they can create code that is much easier to maintain and improve over time. The code is SOLID.
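To show what the first principle can look like in code, here is a rough Java sketch of my own (the class names are made up, not from the article). Formatting a report and saving it are separate responsibilities, so they live in separate classes:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Each class below has exactly one reason to change.
    class Report {
        private final String body;
        Report(String body) { this.body = body; }
        String getBody() { return body; }
    }

    class ReportFormatter {                // responsibility: formatting only
        String toPlainText(Report report) {
            return "REPORT\n------\n" + report.getBody();
        }
    }

    class ReportSaver {                    // responsibility: persistence only
        void save(String formattedText, Path target) throws IOException {
            Files.writeString(target, formattedText);
        }
    }

If the report layout changes, only ReportFormatter is touched; if we switch from files to some other storage, only ReportSaver is touched. That separation is what makes SOLID code easier to maintain over time.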

 

Automated Unit Testing

Automated unit testing is a software development and testing approach where you test units independently to ensure that they are operating correctly. Unit testing can be done manually, and it was in the past, but automation has taken over and everyone is thankful for that.

I use JUnit in Eclipse to test Java programs continuously. It makes it very easy to track where there is an error and what caused it to happen.
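As a rough sketch of what one of those tests looks like, here is a tiny made-up class and a JUnit test for it (written against the JUnit 4 API; the names are mine, not from the article):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical unit under test.
    class Calculator {
        int add(int a, int b) { return a + b; }
    }

    public class CalculatorTest {
        @Test
        public void addReturnsTheSumOfItsArguments() {
            Calculator calculator = new Calculator();
            assertEquals(7, calculator.add(3, 4)); // fails immediately if add() breaks
        }
    }

Every time the suite runs in Eclipse, the failing test is pointed out directly, which is exactly what makes tracking down the error so easy.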

Developers become much more confident in their work when they don’t have to worry about wasting a bunch of time hunting for errors; instead, we test immediately and fix each problem before things get too cluttered.

 

Continuous Integration

Continuous integration quite literally means that you continuously integrate code, fixing issues before they are submitted to the actual project repository. In my classes we used GitHub or GitLab to manage repositories.

It works like this: a developer checks new code into the repository, and the integration process then builds the project and runs tests while analyzing the code. The CI system detects any problems with the code and gives feedback to the developer, the developer fixes them, and then the changes are approved. This way broken code rarely makes it into the main project.
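For GitLab, the pipeline that does this lives in a .gitlab-ci.yml file at the root of the repository. Here is a minimal sketch, assuming a Gradle-based Java project (the job names are made up, and the build commands would need to match your own project):

    # Minimal, hypothetical GitLab CI pipeline.
    stages:
      - build
      - test

    build-job:
      stage: build
      script:
        - ./gradlew assemble     # compile the project on every push

    test-job:
      stage: test
      script:
        - ./gradlew test         # run the unit tests and fail the pipeline on errors

GitHub offers the same idea through its own workflow files; the point is that the checks run automatically on every submission, not that you use one particular tool.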

One of the benefits of using continuous integration is that the version control system holds all current and changed code, so you can easily go back to see what changed and how something may have broken.

 

From the blog CS@Worcester – Rookey Mistake by Shane Rookey and used with permission of the author. All other rights reserved by the author.

Software Technical Review

In class we did a group activity where we worked in teams of five and conducted a software technical review. In a software technical review you have a specific role, and you must fulfill that role's specific duties. There are four roles: the producer, the review leader, the recorder, and the reviewers.

Producer–  The producer is the person who created the work that is being reviewed.

Review Leader–  The review leader schedules the review meetings, prepares materials for meetings, conducts meetings, and writes the review report.

Recorder- The recorder’s job is to take notes of what is being said. They also document anomalies, decisions and recommendations.

Reviewer– The reviewers’ job is to prepare an individual reviewer issue sheet that is given to the review leader before the meeting. The sheet contains all of the issues that the reviewer found in the code.

There are three different types of software technical reviews: the walkthrough, the technical inspection, and the audit.

Walkthrough- A walkthrough is an informal meeting with the producer and the colleagues. There is little preparation and little documentation.

Technical Inspection- A technical inspection is a formal process and includes training. There is sufficient and budgeted preparation time, and the team is very carefully selected.

Audit– An audit is a review that is held by an external group. The purpose of an audit is to ensure that you are conforming to standards.

Why would we waste our time with such a complicated process when we could just look for faults individually? Well, there are many good reasons why we hold reviews and why the process is so important.

Reviews push developers to communicate with one another, they give an opportunity to train new employees, they help management report progress in the business, they uncover defects, they build team morale, and they give the customer reassurance that the product comes out the way it should.

Going into your first review is probably nerve wracking. If you can remember the proper review etiquette, you should be golden!

Be prepared– There is nothing worse than an unprepared team member

Be respectful- it is the golden rule after all. Review the product not the producer.

Avoid discussions of style- Not everyone likes the same thing you do, as long as it is not wrong leave it be.

Provide minor comments to the producer at the end of the meeting

Be Constructive- help others, don’t bring them down.

Remain focused- identify issues and don’t try to solve them yet.

Participate- Contribute to the discussion, but do not try to steal the spotlight; that can be annoying.

Be open- the results of the review should be available to the entire organization.

 

My source of information was our class slides, but you can learn more about software technical reviews here:

http://www.softwaretestinggenius.com/understanding-software-technical-reviews-strs

From the blog CS@Worcester – Rookey Mistake by Shane Rookey and used with permission of the author. All other rights reserved by the author.

Ministry of Testing Podcast

The Philosopher and the tester

In this episode of the Ministry of Testing, Israel Alvarez talks about his transition from a philosophy background to becoming a QA tester. He believes there are a lot of positives from his philosophy background that have helped his career as a context-driven software tester. Philosophy raises topics and concepts that force you to think, and so does testing, so it was fairly natural for him to apply the skills he acquired in philosophy. Being able to think critically is key to becoming a great QA tester, and knowing what to test and how to test it is arguably one of the hardest parts of the job. Often, as a tester, you have to analyze your own thinking, many times risking analysis paralysis.

As a math and philosophy major, Israel was always faced with problems that didn’t have plain, clear-cut solutions, so he had to apply what he had learned and think outside the box to get the job done. That’s what makes testing so hard. It’s easy to come up with some things to test in a piece of code or a program, but finding everything that needs to be tested requires a thorough understanding of the product or software being developed. You often need to understand it even better than the creators of the product do. The only way you can adequately test a product is to find its boundaries, applications, and purposes, and then see what you can do to challenge or break them.

In a startup, for instance, there are deadlines, scope changes, and many other challenges that testers have to endure, so proving your value as a tester is very important early on. Learning to articulate and defend one’s views as a tester is an essential skill every tester needs in order to grow. This is a major challenge because testers often have to prove their point and their findings to the programmers. Programmers often have strong views and passion for their work, and in order to properly point out a defect or bug in that work, you need to be able to establish yourself and emphasize overall product quality in your arguments. Developers will often have strained feelings toward your work as a tester; it’s just the nature of the job as a QA manager or tester. It’s your job to ensure that the developers put out the best possible product or software they can produce.

 

Source

https://soundcloud.com/ministryoftesting/the-philosopher-and-the-tester-israel-alvarez

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Episode 33 — Testing in Data Science

In this week’s testing podcast episode, Brian explores testing in data science with the famous Katharine Jarmul. Katharine is an expert in data science and machine learning, and she mainly uses Python to write unit tests for her projects. I picked this podcast because, after listening to it, I learned more about how to put together testing teams, how to manage and direct traffic in a testing team, and how to be the driving force for success in the team. According to her, no matter how much we know as a team, with each testing project we need to bring together all our resources and ideas. Testing often goes beyond the scope of what is considered the norm, because in testing we normally try to find the boundaries and limits of products and software. As a teacher and owner of a consulting company, Katharine often spends her days developing testing strategies that require new and modern testing approaches such as integrating QA through agility and TCoEs, higher automation levels with a focus on security, and context-driven testing.

 

Integrating QA through agility and TCoE

Though agile development teams have been around for a long time, agility in testing is still nascent. With the continuous pressure to deliver software quickly, businesses are investing time and money into setting up a TCoE (Testing Center of Excellence) with the objective of reducing the cost of quality (CoQ), increasing test effectiveness, and generating more ROI out of testing. From 2011 to 2014, the number of operational TCoEs increased from 4% to 19%, and it is expected to increase further in the future.

 

Higher Automation Levels with a focus on security

System robustness and security have always been top priorities, but with the growth in social media and mobility and the need for software that can be integrated with multiple platforms, systems are becoming more vulnerable. There is a pressing need to ensure enhanced security, particularly in applications handling sensitive data. This is causing QA to focus more on security testing.

 

Context driven testing

The challenge for businesses of maintaining the central hubs of hardware, middleware, and test environments necessary to comprehensively test their products has caused context-driven testing to become more popular, as it ensures more testing coverage from diverse angles. It is expected that this will impact skill development among testers, as there will be more demand for testers with exposure to different contexts.

 

Sources

https://testingpodcast.com/33-katharine-jarmul-testing-in-data-science/

http://www.cigniti.com/blog/top-7-trends-in-software-testing/

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-12-11 23:14:26

The article this blog post is written about can be found here.

This article gives an overview of web application testing, which has several aspects unique to it compared with other software testing. I chose the article from a few sources on web testing because it was easily understandable while providing enough detail to get a feel for what’s involved.

One of the forms of testing highlighted in this article is usability testing. This includes verifying that there’s a consistent look and feel throughout the site, the application is easy to navigate, and it’s clear to users what options they have available to them. The article also makes note of the 1998 amendment to section 508 of the Rehabilitation Act, which outlines accessibility requirements for people with disabilities on information technology systems belonging to the US federal government. Section 508 compliance isn’t necessary for any non-federal website, but making sure to include accessibility features opens your web application up to a wider audience. The article gives the example of what your application should do if a user fails to enter a required field: simply changing the field title’s color to something noticeable like red, as is commonplace, wouldn’t be useful for someone who has trouble distinguishing colors. Another visual cue like an asterisk would be useful in this situation.

HTML verification is another form of testing for web applications. Testing for correct syntax is the obvious form, but it also includes testing the way your application displays across different internet browsers, OSes, screen resolutions, and device types. Your application may be usable and look great in one context, but break in another.

Load testing must also be done on applications that are intended to be accessed through the internet. They have to be able to function during times of high traffic, and testing of this sort can be used to find bottlenecks. In addition, performance tuning is prudent. All pages of your application should load quickly – the article suggests within 15 seconds.
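As a very rough illustration of the idea (this is my own sketch, not from the article, and a real load test would use a dedicated tool such as JMeter), here is a small Java program that fires a few concurrent requests at a page and prints how long each one takes. The URL is only a placeholder:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class TinyLoadCheck {
        public static void main(String[] args) throws Exception {
            String url = "https://example.com/";          // placeholder page
            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(5);

            // Send five GET requests at the same time and time each response.
            for (int i = 0; i < 5; i++) {
                pool.submit(() -> {
                    try {
                        long start = System.currentTimeMillis();
                        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                        HttpResponse<String> response =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        long elapsed = System.currentTimeMillis() - start;
                        System.out.println("Status " + response.statusCode() + " in " + elapsed + " ms");
                    } catch (Exception e) {
                        System.out.println("Request failed: " + e.getMessage());
                    }
                });
            }
            pool.shutdown();
        }
    }

Even a toy like this makes it obvious when one page is dramatically slower than the rest, which is the kind of bottleneck the article says load testing is meant to surface.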

User acceptance testing is used to determine whether your application does what it set out to do and makes something easier for the user instead of harder. One way this can be done is with a beta release.

Finally, of extreme importance in web applications is security testing, which should be done by qualified security specialists. The damage that can be done if this is neglected is immense.

This article gave me a good introduction to the additional testing required for web applications. Of all the details, I found adherence to section 508 especially interesting. It might not be a legal requirement for anything I design in the future, but if I ever do design a web application destined for the real internet, I will definitely want to make it accessible.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

Thoughts on “Hybrid Verification: Mixing Formal Methods and Testing”

This article, by Dr. Ben Brosgol, focuses on a mixture of formal methods and testing practices (together called “hybrid verification”) and the use of “contracts” that consist of preconditions and postconditions in order to formalize the assumptions made by critical code.

I chose to write about this article because it highlights some of the limits of testing, shows how to provide additional security for critical code, and introduces contract-based programming.

Brosgol defines a “contract” in programming as a set of preconditions and postconditions that wrap a subprogram (function, method, etc.) call.  The subprogram cannot begin unless its preconditions are met, and cannot return until its postconditions are true.  This provides a contract between a program and its subprograms that guarantees a certain state at critical times.  There are tools written for some languages (he uses SPARK as an example) that can do both static and dynamic contract testing and provide proof that the code will work as specified.
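The article’s tooling is SPARK-specific, but the shape of a contract can be sketched in plain Java too. The class below is my own made-up illustration, with the precondition and postcondition checked dynamically using assertions (enabled with java -ea); it is only an analogy for the idea, not the SPARK mechanism itself:

    // Hypothetical contract example: withdraw() states what it requires
    // of its caller and what it guarantees on return.
    public class Account {
        private int balance;

        public Account(int openingBalance) {
            this.balance = openingBalance;
        }

        public void withdraw(int amount) {
            // Precondition: a positive amount no larger than the balance.
            assert amount > 0 && amount <= balance : "precondition violated";

            int oldBalance = balance;
            balance -= amount;

            // Postcondition: the balance dropped by exactly amount and is non-negative.
            assert balance == oldBalance - amount && balance >= 0 : "postcondition violated";
        }

        public int getBalance() {
            return balance;
        }
    }

A proof tool tries to show statically that no caller can ever violate these conditions, while runtime checks like the assertions above only catch a violation when a particular execution hits it; that difference is the trade-off between formal verification and testing that the rest of the article explores.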

Brosgol then details ways to mix testing and formal verification.  If formally verified code calls subprograms that were tested rather than proven, the formal analysis tools will attempt to show that the preconditions are met, and assume that the tested code satisfies the postconditions.  This also requires that either the contracts are checked at runtime, or that sufficient testing was done such that the developer is confident the contracts will be fulfilled by the tested code.  If formally verified code is called from tested code, there need to be runtime checks for the preconditions (because the tested code does not guarantee those in the way the formal verification requires), but because the postconditions have been proven there is no need for checks at that point.

Next, Brosgol mentions the need for good choice of postconditions.  Strong, extensive postconditions make it easier to provide proof, but may have unacceptable overhead if they need to be checked dynamically.

He concludes that the relatively new combination of formal proof tools and contract verification on both a static and a dynamic basis opens up new avenues to create code that secures its critical sections.

This article helped me to understand that there’s a much wider world of testing beyond what we covered in class.  We didn’t talk about proof-based testing at all, and that’s a subject area that I believe I should learn more about.  It also highlights the way that our understanding of testing is ever-expanding.

 

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.