Category Archives: Week 8

Some benefits of Angular and Typescript!

Over the past few months, I’ve been asked the same general question about Angular multiple times in onsite training classes, while helping customers with their architecture, or when talking with company leaders about the direction web technologies are heading. After hearing that general question over and over I decided it was time to put together a post … Continue reading Some benefits of Angular and Typescript!

From the blog cs-wsu – Kristi Pina's Blog by kpina23 and used with permission of the author. All other rights reserved by the author.

Agile Testing

For this week’s blog post I will be discussing Agile Testing, describing what it is, its principles, and more. First off, agile testing is a software testing process that follows the principles of agile software development. Essentially, agile testing is a continuous process rather than a sequential one. Testing begins at the start of a project and is integrated into the development of the entire project. There is a contrasting approach called waterfall testing, which is more structured and detailed, while agile testing is more minimal. The blog I link below includes a thorough comparison between the two that I will leave for the reader to visit, as it is covered in far better detail there than I could manage here.

Main Principles of Agile Testing

The main principles of agile testing are as follows. Testing is continuous, ensuring the continuous progress of a project, with continuous feedback providing an ongoing basis for what your project’s requirements are going to need. Tests are performed by the whole team: the developers and the business analysts of a project also test the application instead of just the test team. Feedback response time decreases, essentially because continuous testing gives a better understanding of what is happening and therefore allows a faster response. Simplified and clean code, less documentation, and test-driven development are all key principles that arise from the continuous testing described above, allowing all of this to be done in a much cleaner fashion.
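As a small illustration of the test-driven principle above, a test can be written alongside (or before) the code it verifies and then run continuously as the project grows. The `apply_discount` function and its rules here are a hypothetical example, not something from the article:

```python
# A minimal sketch of test-driven, continuous checking. The function
# and its rules are hypothetical examples, not taken from the article.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Tests like these are run by the whole team from the start of the
# project, not only by a test team at the end.
assert apply_discount(100, 20) == 80
assert apply_discount(50, 0) == 50
```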

Advantages of Agile Testing

The benefits of agile testing are simple, as they all come from the continuous model it follows. First and foremost, it saves time and money, because testing takes place right from the beginning and not at the end. Less documentation is needed, and the process is very flexible and adaptable to changes throughout its progress. Regular feedback is provided, once again due to the continuous model.

In conclusion, agile testing not only facilitates early detection of bugs and defects but also reduces the time spent fixing them. This model of testing can yield a much better-quality product due to its constant testing processes. The article linked here is very informative about agile testing; I only wish that some of the ideas were fleshed out a little more, with perhaps a few examples showing how it works in practice.


https://reqtest.com/testing-blog/agile-testing-principles-methods-advantages/

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Decorator Design Pattern

For this week’s blog post I will be discussing the decorator design pattern as covered in Derek Banas’ Design Pattern Video Tutorials on YouTube. There you can find pretty much any design pattern you are interested in, each discussed in a video usually under 15 minutes long.

You use this design pattern when you want the capabilities of inheritance with subclasses but know you need to add functionality at run time; it lets you modify an object dynamically. The decorator pattern is more flexible than inheritance and simplifies code, because you add functionality using many small classes, which makes it easy to extend with new code. The example he uses to explain the pattern is a great one: you have a pizza and you want to be able to put multiple toppings on it. He shows how messy this can get with simple subclasses and an inheritance-based design. Then he solves the same problem with the decorator pattern, showing how useful it can be in situations like this. Essentially, you make a Pizza interface, with a concrete class being a plain pizza whose toppings you can modify. Then you have a ToppingDecorator abstract class where the basis for the toppings goes, followed by a topping class for each topping you’d like to have. Next, he runs through what this would look like in code, improving upon the inheritance-based code he had written before. What he writes is a much cleaner and simpler version of what he set out to do, made possible by the decorator design pattern.
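The pizza example can be sketched roughly as follows. The video itself uses Java; this Python version only approximates his structure, and the specific toppings and prices (in cents) are made up:

```python
# Decorator pattern sketch of the pizza example. The video uses Java;
# this Python version approximates the same structure with made-up
# toppings and prices (in cents).

class Pizza:
    def get_description(self):
        return "Plain pizza"

    def get_cost(self):
        return 400  # cents

class ToppingDecorator(Pizza):
    """Wraps another Pizza so behavior can be added at run time."""
    def __init__(self, pizza):
        self.pizza = pizza

class Mozzarella(ToppingDecorator):
    def get_description(self):
        return self.pizza.get_description() + ", mozzarella"

    def get_cost(self):
        return self.pizza.get_cost() + 50

class TomatoSauce(ToppingDecorator):
    def get_description(self):
        return self.pizza.get_description() + ", tomato sauce"

    def get_cost(self):
        return self.pizza.get_cost() + 35

# Toppings stack dynamically instead of needing a subclass for every
# possible combination of toppings.
pizza = TomatoSauce(Mozzarella(Pizza()))
print(pizza.get_description())  # Plain pizza, mozzarella, tomato sauce
print(pizza.get_cost())         # 485
```

Notice that adding a new topping means adding one small class, not doubling the number of subclasses.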

This entire YouTube channel, and specifically this playlist of his, is perfect for learning any of the design patterns we may or may not discuss in class, or that you want to learn on your own. As I said above, he explains everything in detail at a good pace where almost anyone can follow what is happening in the video. Along with the examples he shows and writes in real time, I recommend this channel and playlist to anyone who is interested in learning design patterns.

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Test Automation, are you doing it right?

Source: https://www.softwaretestinghelp.com/automation-testing-tutorial-1/

This week’s reading was a test automation tutorial. It defined test automation as a technique for testing and comparing the actual outcome with the expected outcome, mainly used to automate repetitive tasks that are difficult to perform manually. Test automation allows testers to achieve consistent accuracy and steps in their testing, which in return reduces the overall time spent testing the same thing over and over. As long as the tests do not become obsolete, new tests can be added on top of the current scripts as a product evolves. The authors also suggest that these tests should be planned so that maintenance will be minimal; otherwise time will be wasted fixing automation scripts. The benefits are huge, but there will be challenges, risks, and other obstacles, such as knowing when not to automate and to turn instead to manual testing, which allows a more analytical approach in certain situations. This relates directly to the misconception that no bugs exist as long as automation scripts are running smoothly. The tutorial concludes that test automation is only right for certain types of tests.
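At its simplest, an automated test just compares an actual outcome with an expected one over and over. A repetitive check like the sketch below (the `normalize_username` function is a made-up example, not from the tutorial) can run thousands of times without any manual effort:

```python
# A minimal sketch of automated checking: actual vs. expected outcome.
# The function under test is a made-up example, not from the tutorial.

def normalize_username(raw):
    """Trim whitespace and lowercase a username."""
    return raw.strip().lower()

# Each (input, expected) pair is one repetitive check that would be
# tedious and error-prone to perform manually again and again.
cases = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

for raw, expected in cases:
    actual = normalize_username(raw)
    assert actual == expected, f"{raw!r}: expected {expected!r}, got {actual!r}"
```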

I found this tutorial to be incredibly helpful, as it provided real-life situations as examples for many of the topics covered. It is effective at making the reader see the reality behind test automation through the five W’s – who, what, when, where, and why – even if they are not stated explicitly. I can conclude that I took test automation for granted, as I assumed that all tests would be automated regardless. That way of thinking is a wrong step for a tester to take, as not all bugs can be discovered through pre-defined tests in static test cases. Manual testing is necessary to nudge bugs into appearing through manual intervention, as it pushes the limits of the product. Overall, the main takeaway for me is the planning phase of test automation. By splitting different tests into different groups, we can easily set a path for testing in an ordered way. For example, it would be best to test basic functionality, then integration, before testing specific features and functionalities, since it would logically be more difficult to solve complex bugs before smaller ones. It goes to show that test automation is not as easy as it looks.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

On REST API’s

This tutorial, What is REST API Design? on mulesoft.com, is a beginner-friendly description of how REST APIs operate. The first section of the tutorial focuses on defining RESTful APIs and how they are typically used. However, the majority of the post describes the five key criteria that turn a regular API into a RESTful one.

What makes Representational State Transfer (REST) APIs different from other APIs is adherence to predetermined constraints. Just as IP and TCP are sets of guidelines that allow information to be shared over the internet, REST APIs have a set of five criteria that every developer designing such a system must follow.

The first criterion is that the API must have a client-server architecture. This rule says that the client end and the server end of the system should be separate and have no dependencies on each other: any change to the database or to the program calling the API should not affect the other part of the system.

Second, the API must be stateless, which means that every call to the API should contain all the data the server needs to fulfill the request. The server does not need to store any client session data between requests.

Third, when a REST API responds to requests, the data sent to the client should be cacheable; serving responses from cache is faster and reduces stress on the server end.

Fourth, there should be a uniform interface handling communication between the client and the server. Usually this is done using HTTP and URLs. This is a crucial part of a REST API, as there needs to be a common language linking requests and responses.

Finally, the API should be a layered system. This constraint ensures that the program is divided into layers, each focusing on a particular responsibility and handling communication with the layer above or below it. This provides a degree of security and flexibility between the layers of the system.
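The statelessness constraint in particular can be illustrated with a small sketch: every request carries everything the server needs, so the server keeps no session between calls. The handler shape and token scheme below are hypothetical, not from the tutorial:

```python
# Sketch of the stateless constraint: each request carries all the data
# the server needs, so no session state is kept between calls.
# The handler and the token scheme are hypothetical examples.

def handle_get_order(request):
    """Handle a request using only what the request itself contains."""
    token = request["auth_token"]   # credentials travel with every call
    order_id = request["order_id"]  # all parameters are in the request
    if token != "valid-token":
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": f"order {order_id}"}

# Two independent calls: the server remembers nothing between them.
r1 = handle_get_order({"auth_token": "valid-token", "order_id": 7})
r2 = handle_get_order({"auth_token": "bad", "order_id": 7})
print(r1["status"], r2["status"])  # 200 401
```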

This tutorial definitely helped my understanding of REST APIs, as the author went into some detail describing each constraint and the role each plays in achieving the goals of RESTful APIs.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Mark Richards on the Evolution of Software Architecture

For this week’s blog on Software Architecture, I listened to Episode 3 of the “Software Architecture Radio” Podcast, which featured Mark Richards, an independent software architect. He has 32 years in the industry, with more than twenty years as a software architect. 
They mostly talked about the evolution of software architecture. Although some of the things they talked about went a little over my head, I was able to pick up on the majority of what they were talking about. 
He divided up the evolution of architecture into five stages. He talked about evolution happening in vertical and horizontal slices, that is within each layer and one layer affecting those above and around it. The layers were (1) hardware, (2) software, (3) human interaction, (4) social interaction, and (5) the environment, such as the internet of things.
He said that one thing in particular, need, drives change the fastest. As an aside, he also said that that is the best way of teaching something in this field: establish why a need exists, and only then explain the concept.
There are three things covered that influence architecture. First, agility, which is the speed and coordination of change; most companies try to embrace this, but many businesses fail at it. Second, velocity, which is the speed of something in a given direction. Third, modularity, which is when independent parts can be combined in many ways. They went over the upsides and downsides of each, and you often have to compromise one aspect for another.
I thought one of the most interesting parts of the podcast was when he said that if you want to see what is coming next in technology, you should read academic papers from twenty years ago. It takes that long for one layer of architecture, hardware, to catch up to another, software. It is only recently that we can implement that technology.
Another interesting thing he said was that one of our biggest inhibitors to evolution is the integration of data and functionality. He foresaw a paradigm shift in how we handle this.
As a parting message, he was asked, “what is one piece of advice you would give to an aspiring software architect,” and his answer surprised me. He said to work on people skills, and that this was “hands down” the most important advice he could give. This skill is key in everything you do, including leading and mentoring. I found this incredibly interesting, because I am often reminded that we never do anything in a bubble, and it is extremely important to be able to “play well with others.”

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.

Test Automation and Continuous Testing

Blogger Kyle McMeekin, writing for QAsymphony.com in his post “Test Automation vs. Automation Testing,” explains the definition of and distinction between automated testing and test automation, and also goes into their roles in continuous testing and why this type of testing is important to understand.

McMeekin begins by defining automation as using technology to complete a task. When applied to the field of software quality assurance, there are two different types of automation: automated testing and test automation. While the terms sound interchangeable, the author differentiates them by explaining how the scope of each is different.

Automated testing is concerned with the automation of executing particular test cases, while test automation is concerned with automating the management of the test cases as an aggregate. Automated testing actually carries out the tests we are interested in, whereas test automation manages the pipeline of all of the automated tests. So the scope of automated testing is local compared to the more global scope of test automation.

After differentiating between these two types of automation, McMeekin describes how they apply to continuous testing, and explains the importance of this strategy in today’s economic climate. While most software testing is done after development is finished, today more tech companies are using models where the software is constantly in development and constantly updated even after it is released. In this case, testing must be conducted as soon as something changes. This technique is known as continuous testing.

However, constantly keeping track of every test suite is a huge task in itself. This is where test automation comes in. If we can automate the management of this ever-growing list of test suites, a massive amount of work is saved on the tester’s part, freeing up time to create effective test cases.

Automation is at the heart of computer science, as saving work by taking advantage of computers’ ability to handle processes is integral to being a good developer. So learning how to apply automation in the context of software testing is definitely advantageous, especially since it is so common nowadays for programs to be added to after release, which makes the number of tests to keep track of grow steadily. By taking advantage of test automation to keep track of all the testing processes, we don’t need to worry about the timing of the tests, and we can spend more time testing and analyzing the behavior of software.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Fakes, Mocks, and Stubs in Unit Testing

In the blog post Test Doubles — Fakes, Mocks and Stubs., Michał Lipski describes and implements three different types of test doubles: fake, stub, and mock, and the situations in which you would need to use them in your unit testing. He also provides useful examples for each of the three terms.

His blog includes a useful image that helps to illustrate the differences between the three.

Fake – an object that has a working implementation, but not the same implementation as the one in production. Usually fakes take some shortcuts and are simplified versions of the production code. This is useful because you’re able to run integration tests of services without starting up a database or performing time-consuming requests.

Stub – an object that holds predefined data and uses it to answer calls during tests. This is useful when we can’t or don’t want to involve objects that would answer with real data or have undesirable side effects.

Mock – an object that registers calls as they are received. This is useful when we don’t want to invoke production code or when there is no easy way to verify that the intended code was executed.
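The three doubles can be sketched in a few lines each. These interfaces (a user repository, a weather service, a mailer) are illustrative stand-ins, not taken from Lipski's examples, which are written in Java:

```python
# Minimal sketches of the three test doubles described above. The
# interfaces are illustrative stand-ins, not Lipski's (Java) examples.

class FakeUserRepository:
    """Fake: a working but simplified implementation (in-memory, no DB)."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

class StubWeatherService:
    """Stub: answers calls with predefined data."""
    def today(self):
        return "sunny"

class MockMailer:
    """Mock: records the calls it receives so a test can verify them."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

repo = FakeUserRepository()
repo.save(1, "alice")
mailer = MockMailer()
mailer.send("alice@example.com", "hi")

assert repo.find(1) == "alice"                       # fake works like the real thing
assert StubWeatherService().today() == "sunny"       # stub returns canned data
assert mailer.sent == [("alice@example.com", "hi")]  # mock verifies the call happened
```

Python's standard library also ships `unittest.mock` for building mocks and stubs without hand-writing classes like these.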

The reason I chose this blog post in particular is that the first time I was exposed to stub functions was earlier in my internship, while working on unit tests, and at the time I didn’t have a great understanding of their purpose. I was recently reminded of them last week during lecture, so I decided it’d be a useful topic to read about. I wasn’t aware then that there were different types of test doubles, so I found the examples in Lipski’s blog really helpful in understanding the key differences between them, as they share many similarities and are likely to get mixed up. According to Lipski, misunderstanding and mixing test double implementations may influence test design and increase the fragility of tests, standing in the way of seamless refactoring. So I think this blog may be a useful resource if you are trying to write clean unit tests.

Source: Test Doubles — Fakes, Mocks and Stubs.

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

Defending against Spectre and Meltdown attacks

http://news.mit.edu/2018/mit-csail-dawg-better-security-against-spectre-meltdown-attacks-1018

In January the security vulnerabilities Meltdown and Spectre were discovered. These vulnerabilities arose not from the usual software bugs or physical CPU defects but from the architecture of the CPU itself, which means that huge numbers of people, businesses, and more were vulnerable. With this new method of defense it is much harder for attackers to get away with such attacks. It may also have an immediate impact on fields like medicine and finance, which limit their use of cloud computing due to security concerns. With Meltdown and Spectre, attackers took advantage of the fact that operations can take different amounts of time to compute. For example, someone trying to brute-force a password can measure how long it takes for a wrong password to be rejected and compare it to another entry; if the second one takes longer, then something in that entry contains a correct number or letter. The usual defense against this kind of attack is Cache Allocation Technology (CAT), which partitions memory so that it is not stored all in one area. Unfortunately this method is still quite insecure, because data remains visible across all partitions. The new approach is a form of secure way partitioning called Dynamically Allocated Way Guard (DAWG). Since it is dynamic, it can split the cache and then change the sizes of those partitions over time. DAWG fully isolates one program from another through the cache while still offering performance comparable to CAT. It establishes clear boundaries for programs so that when sharing should not happen, it does not, which is helpful for programs handling sensitive information.
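DAWG defends at the cache level, but the password example above describes a classic timing side channel that also exists in plain software, where the standard mitigation is a comparison that takes the same time no matter where the first mismatch occurs. A sketch, using Python's standard-library helper:

```python
# Sketch of the timing side channel from the password example, plus the
# standard software-level mitigation: a comparison whose running time
# does not depend on where the first mismatch occurs.
import hmac

def naive_compare(a, b):
    """Leaks timing: returns as soon as a character differs."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # early exit -> measurable timing difference
    return True

def constant_time_compare(a, b):
    """Compares every byte regardless of mismatches (stdlib helper)."""
    return hmac.compare_digest(a.encode(), b.encode())

print(naive_compare("secret", "sec###"))          # False, but fast exit leaks info
print(constant_time_compare("secret", "secret"))  # True, with no timing leak
```

This does not stop cache-level attacks like Spectre and Meltdown; it only illustrates the general principle that variable execution time can leak secrets.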

The article mentions that these microarchitectural attacks are becoming more common because other methods of attack have become more difficult. I thought that was interesting, because this seems like a relatively new attack method and a new security risk that defenses have not had time to catch up with. This is an issue that can affect anyone, and it is a serious problem. On top of that, performance is a big concern with this kind of security, since it deals directly with the CPU and its architecture, which is not an easy thing to fix. The article also points out that because of these attacks, more information sharing between applications is not always a good thing. I find this pretty interesting, since many different applications made by the same company now have information-sharing capabilities, such as the Microsoft umbrella of software. Sharing information between applications can actually put you at more risk than the time saved by sharing is worth.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Test Scenario vs Test Case

https://reqtest.com/testing-blog/test-scenario-test-case/

This blog post compares two important aspects of software testing: test scenarios and test cases. A test scenario is a high-level documentation of a use case. It is used to make sure that the end-to-end functioning of the software is working correctly. With this type of testing, clients, stakeholders, and developers help the testers create scenarios that ensure the software works as intended. Test scenarios look at software from the point of view of the user to determine real world scenarios and use cases. Some important reasons to use test scenarios are:

  • They help validate that software is working correctly for each use case
  • They help determine the real-world use of the software
  • They help find discrepancies and improve the user experience
  • They save time, money, and effort
  • They are vital in evaluating the end-to-end functionality of the software
  • They help build better test cases because the test cases are derived from the scenarios

A test case is a set of conditions that help determine whether the software being tested satisfies requirements and works correctly. It is a single executable test that contains step-by-step instructions to verify that software functions the way it’s supposed to. A test case is used to validate a test scenario. Normally, a test scenario contains multiple test cases which contain information on how to test the scenario. This information includes prerequisites, inputs, preconditions, expected results, and post-conditions. Test scenarios are extracted from user stories and test cases are extracted from scenarios.
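The relationship between a scenario and its test cases can be sketched as one high-level scenario broken into several concrete checks. The login scenario and its rules below are hypothetical examples, not from the blog post:

```python
# Sketch of the relationship: one high-level test scenario, multiple
# concrete test cases derived from it. The login scenario and its
# rules are hypothetical examples, not from the blog post.

def login(username, password):
    """Toy system under test."""
    return username == "alice" and password == "pw123"

# Test scenario: "A user can log in" (derived from a user story).
# Each test case has inputs and an expected result.
test_cases = [
    # (description,        username, password, expected)
    ("valid credentials",  "alice",  "pw123",  True),
    ("wrong password",     "alice",  "nope",   False),
    ("unknown user",       "bob",    "pw123",  False),
]

for description, user, pw, expected in test_cases:
    assert login(user, pw) == expected, description
```

A real test case would also document preconditions and post-conditions, as the post describes, but the scenario-to-case structure is the same.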

Both test scenarios and test cases should be used to ensure high test coverage. As agile practices become more common, test scenarios are being used more and more.

I thought that the content of this blog was interesting and useful. I learned the difference between test scenarios and test cases and why both of them are used. Since agile development environments are becoming so common it is very useful to understand what test scenarios are. It was interesting to learn how test scenarios and test cases are related because I had no idea what differentiated them before I read this post. Overall this was an informative article that I enjoyed reading.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.