Category Archives: Week 8

Decorator Design Pattern

For this week’s blog post I will be discussing the decorator design pattern as covered in Derek Banas’ Design Pattern Video Tutorials on YouTube. There you can find a video on pretty much any design pattern you are interested in, each usually under 15 minutes.

You use this design pattern when you want the capabilities of inheritance and subclasses but know you will need to add functionality at run time. Because of this, you can modify an object dynamically, which makes the decorator pattern more flexible than inheritance alone. It also simplifies code, because you add functionality through many small classes, letting you extend behavior with new code instead of rewriting old code. The example he uses to explain the pattern is a great one: you have a pizza and you want to be able to put multiple toppings on it. He first shows how messy this gets with plain subclasses in an inheritance-based system, and then shows how the decorator pattern handles the same situation, demonstrating how useful it can be. Essentially, you make a Pizza interface, with a concrete class being a plain pizza whose toppings you can modify. Then you have a ToppingDecorator abstract class, where the base behavior for toppings goes, followed by a class for each topping you’d like to have. Next, he runs through what this looks like in code, improving upon the inheritance-based code he had written before. The result is a much cleaner and simpler version of what he set out to do, made possible by the decorator design pattern.
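
As a rough sketch of the structure the video describes, here is what the pizza example might look like in Java. The class names follow his design, but the exact methods and prices are my own stand-ins:

    interface Pizza {
        String getDescription();
        double getCost();
    }

    // Concrete component: the plain pizza that toppings will wrap.
    class PlainPizza implements Pizza {
        public String getDescription() { return "thin dough"; }
        public double getCost() { return 4.00; }
    }

    // Base decorator: holds a reference to the Pizza it wraps and
    // delegates to it by default.
    abstract class ToppingDecorator implements Pizza {
        protected final Pizza pizza;
        protected ToppingDecorator(Pizza pizza) { this.pizza = pizza; }
        public String getDescription() { return pizza.getDescription(); }
        public double getCost() { return pizza.getCost(); }
    }

    class Mozzarella extends ToppingDecorator {
        Mozzarella(Pizza pizza) { super(pizza); }
        public String getDescription() { return pizza.getDescription() + ", mozzarella"; }
        public double getCost() { return pizza.getCost() + 0.50; }
    }

    class TomatoSauce extends ToppingDecorator {
        TomatoSauce(Pizza pizza) { super(pizza); }
        public String getDescription() { return pizza.getDescription() + ", tomato sauce"; }
        public double getCost() { return pizza.getCost() + 0.35; }
    }

    public class PizzaShop {
        public static void main(String[] args) {
            // Toppings are stacked at run time instead of being baked
            // into a subclass for every combination.
            Pizza order = new TomatoSauce(new Mozzarella(new PlainPizza()));
            System.out.println(order.getDescription() + " costs $" + order.getCost());
        }
    }

Adding a new topping is just one more small class, rather than a new subclass for every possible combination of toppings.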

This entire YouTube channel, and specifically this playlist of his, is perfect for learning any of the design patterns we may or may not discuss in class, or any you want to learn on your own. Like I said above, he explains everything in detail at a good pace, such that almost anyone could follow what is happening in the video. Along with the examples he shows and writes in real time, this makes the channel/playlist something I recommend to anyone interested in learning design patterns.

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Test Automation, are you doing it right?

Source: https://www.softwaretestinghelp.com/automation-testing-tutorial-1/

This week’s reading was a test automation tutorial. It defined test automation as a technique that runs tests and compares actual outcomes against expected outcomes, used mainly to automate repetitive tasks that are difficult to perform manually. Test automation allows testers to achieve consistent accuracy and repeatable steps, which in turn reduces the overall time spent testing the same thing over and over. Because the tests should not become obsolete, new tests can be added on top of the current scripts as a product evolves. The authors also suggest that these tests be planned so that maintenance will be minimal; otherwise time will be wasted fixing automation scripts. The benefits are huge, but there are challenges, risks, and other obstacles, such as knowing when not to automate and to turn to manual testing instead, which allows a more analytical approach in certain situations. This connects directly to the mistaken perception that no bugs exist as long as the automation scripts run smoothly. The tutorial concludes that test automation is only right for certain types of tests.

I found this tutorial to be incredibly helpful, as it provided real-life situations as examples for many of the topics covered. It is effective at making the reader see the reality behind test automation through the five W’s – who, what, when, where, and why – even if they are not stated explicitly. I can conclude that I took test automation for granted, since I assumed that all tests would be automated regardless. That way of thinking is a wrong step for a tester to make, as not all bugs can be discovered through pre-defined tests in static test cases. Manual testing is necessary to nudge bugs into appearing through manual intervention, as it pushes the limits of the product. Overall, my main takeaway is the planning phase of test automation. By splitting different tests into different groups, we can set a path for testing in an ordered way; for example, it is best to test basic functionality, then integration, before testing specific features and functionalities. It is logically harder to solve complex bugs before the smaller bugs are out of the way. It goes to show that test automation is not as easy as it looks.
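
As a sketch of what that grouping could look like in practice (my own example, assuming JUnit 5 as the test framework), tags can mark which group each test belongs to so the basic checks run before the slower integration stage:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical class under test, defined inline so the sketch is
    // self-contained.
    class Cart {
        private int items = 0;
        void add() { items++; }
        int itemCount() { return items; }
    }

    class CartTests {
        @Test
        @Tag("basic")        // cheap functional check: run on every build
        void cartStartsEmpty() {
            assertEquals(0, new Cart().itemCount());
        }

        @Test
        @Tag("integration")  // slower cross-component check: run in a later stage
        void addingAnItemIncreasesCount() {
            Cart cart = new Cart();
            cart.add();
            assertEquals(1, cart.itemCount());
        }
    }

The build tool can then be configured to run one tag at a time (Maven Surefire’s groups option, for example), so the basic group gates the integration group.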

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

On REST APIs

The tutorial What is REST API Design? on mulesoft.com is a beginner-friendly description of how REST APIs operate. The first section of the tutorial focuses on defining RESTful APIs and how they are typically used, but the majority of the post describes the five key criteria that turn a regular API into a RESTful one.

What makes Representational State Transfer (REST) APIs different from other APIs is their adherence to predetermined protocols. Just as IP and TCP are sets of guidelines that allow information to be shared over the internet, REST APIs have a set of five criteria that every developer designing such a system must follow.

The first criterion is that the API must use a client-server architecture. This rule says that the client end and the server end of the system should be separate and have no dependencies on each other: a change to the database or to the program calling the API should not affect the other part of the system.

Second, the API must be stateless, which means that every call to the API should contain all the data the server needs to fulfill the request. The server does not store client session state between requests.

Third, when a REST API responds to requests, the data sent to the client should be cacheable: storing responses in cache memory is faster and reduces stress on the server end.

Fourth, there should be a uniform interface handling communication between the client and the server, usually done with HTTP and URLs. This is a crucial function of a REST API, as there needs to be a common lifeline between requests and responses.

Finally, the API should be a layered system. This constraint ensures that the program will be divided into layers, each focusing on a particular responsibility and handling communication to the next level above/below. This gives a certain level of security and flexibility between layers of the system.
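
To make these constraints a little more concrete, here is a hypothetical client-side sketch in Java (the endpoint URL and token are made up). The request travels over the uniform HTTP interface, carries everything the server needs in one message, and the response can advertise whether it is cacheable:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestClientExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Uniform interface: the resource is named by a URL and fetched
            // with a standard HTTP verb. Statelessness: everything the server
            // needs, including the auth token, is inside this one request.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users/42"))
                    .header("Accept", "application/json")
                    .header("Authorization", "Bearer <token>")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Cacheability: the server can mark the representation as safe
            // to reuse so repeated reads don't hit the backend every time.
            System.out.println(response.headers()
                    .firstValue("Cache-Control").orElse("(none)"));
            System.out.println(response.body());
        }
    }

The layered-system constraint is invisible here on purpose: the same request could be answered by a cache, a load balancer, or the origin server without this client changing at all.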

This tutorial definitely helped my understanding of REST APIs, as the author went into some detail on each constraint and the role it plays in achieving the goals of RESTful APIs.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Mark Richards on the Evolution of Software Architecture

For this week’s blog on Software Architecture, I listened to Episode 3 of the “Software Architecture Radio” Podcast, which featured Mark Richards, an independent software architect. He has 32 years in the industry, with more than twenty years as a software architect. 
They mostly talked about the evolution of software architecture. Although some of the things they talked about went a little over my head, I was able to pick up on the majority of what they were talking about. 
He divided the evolution of architecture into five layers, and talked about evolution happening in vertical and horizontal slices, that is, within each layer and across layers, with one layer affecting those above and around it. The layers were (1) hardware, (2) software, (3) human interaction, (4) social interaction, and (5) the environment, such as the internet of things.
He said one thing in particular, need, drives change the fastest. As an aside, he also said that that’s the best way of teaching something in this field, by establishing why a need exists and then later explaining the concept.
He covered three things that influence architecture. First, agility: the speed and coordination with which something can change. Most companies try to embrace this, but many businesses fail at it. Second, velocity: the speed of something in a given direction. Third, modularity: when independent parts can be combined in lots of ways. They went over the upsides and downsides of each, and many times you have to compromise on one aspect for another.
I thought one of the most interesting parts of the podcast was when he said that if you want to see what is coming next in technology, you should read academic papers from twenty years ago. It takes that long for one layer of the architecture, hardware, to catch up to another, software; only recently have we been able to implement that technology.
Another interesting thing he said was that one of our biggest inhibitors to evolution is the integration of data and functionality. He foresaw a paradigm shift in how we handle this.
As a parting message, he was asked what one piece of advice he would give to an aspiring software architect, and his answer surprised me. He said to work on people skills, and that this was “hands down” the most important advice he could give. This skill is key in everything you do, including leading and mentoring. I found this incredibly interesting, because I am often reminded of how we never do anything in a bubble, and it is extremely important to be able to “play well with others.”

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.

Test Automation and Continuous Testing

Blogger Kyle McMeekin, writing for QAsymphony.com in his post “Test Automation vs. Automation Testing”, explains the definition of and distinction between automated testing and test automation, and also goes into their roles in continuous testing and why this type of testing is important to understand.

McMeekin begins by defining automation as using technology to complete a task. When applied to the field of software quality assurance, there are two different types of automation: automated testing and test automation. While the terms sound interchangeable, the author differentiates them by explaining how the scope of each is different.

Automated testing is concerned with automating the execution of particular test cases, while test automation is concerned with automating the management of the test cases as an aggregate. While automated testing actually carries out the tests we are interested in, test automation manages the pipeline of all of the automated tests. So the scope of automated testing is local compared to the more global scope of test automation.

After differentiating between these two types of automation, McMeekin describes how they apply to continuous testing, and explains the importance of this strategy in today’s economic climate. While traditionally most software testing is done after development is finished, today more tech companies use models where the software is constantly in development and constantly updated even after release. In that case, testing must be conducted as soon as something changes. This technique is known as continuous testing.

However, keeping track of every test suite constantly is a huge task in itself. This is where test automation comes in. If we can automate the management of this ever-growing list of test suites, a massive amount of work is saved on the tester’s part, freeing up time to create effective test cases.
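
As a rough sketch of that division of labor (my own example, assuming JUnit 5 with the junit-platform-suite module), the individual test classes below are the automated testing, while the suite that aggregates them into one entry point for the pipeline is the test automation:

    import org.junit.jupiter.api.Test;
    import org.junit.platform.suite.api.SelectClasses;
    import org.junit.platform.suite.api.Suite;

    // Hypothetical automated tests standing in for real ones.
    class LoginTests { @Test void userCanLogIn() { } }
    class SearchTests { @Test void searchReturnsResults() { } }

    // The suite manages the aggregate: a continuous-testing job only has
    // to trigger this one class after every change.
    @Suite
    @SelectClasses({ LoginTests.class, SearchTests.class })
    public class RegressionSuite { }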

Automation is at the heart of computer science; saving work by taking advantage of computers’ ability to handle processes is integral to being a good developer, so learning how to apply automation in the context of software testing is definitely advantageous. Especially since it is so common nowadays for programs to be added to after release, the number of tests to keep track of increases steadily. By taking advantage of test automation to keep track of all the testing processes, we don’t need to worry about the timing of the tests, and we can spend more time testing and analyzing the behavior of software.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Fakes, Mocks, and Stubs in Unit Testing

In the blog post Test Doubles — Fakes, Mocks and Stubs., Michał Lipski describes and implements three different types of test doubles: fake, stub, and mock, and the situations in which you would need to use them in your unit testing. He also provides useful examples for each of the three terms.

His post includes a useful diagram that helps illustrate the differences between the three; the definitions below summarize it:

Fake – an object that has a working implementation, but not the same implementation as the one in production. Usually fakes take some shortcuts and are simplified versions of the production code. This is useful because you can run integration tests of services without starting up a database or performing time-consuming requests.

Stub – an object that holds predefined data and uses it to answer calls during tests. This is useful when we can’t or don’t want to involve objects that would answer with real data or have undesirable side effects.

Mock – an object that registers calls as they are received. This is useful when we don’t want to invoke production code or when there is no easy way to verify that the intended code was executed.
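
To make the distinction concrete, here is a minimal hand-rolled sketch in Java. The UserRepository interface is my own invention for illustration; in practice a library such as Mockito would usually generate the mock for you:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical dependency of the code under test.
    interface UserRepository {
        void save(String id, String name);
        String find(String id);
    }

    // Fake: a real, working implementation, just simplified
    // (in-memory instead of a database).
    class InMemoryUserRepository implements UserRepository {
        private final Map<String, String> users = new HashMap<>();
        public void save(String id, String name) { users.put(id, name); }
        public String find(String id) { return users.get(id); }
    }

    // Stub: answers every call with predefined data.
    class UserRepositoryStub implements UserRepository {
        public void save(String id, String name) { /* ignored */ }
        public String find(String id) { return "Alice"; }  // canned answer
    }

    // Mock: registers the calls it receives so a test can verify
    // afterwards that the right interactions happened.
    class UserRepositoryMock implements UserRepository {
        final List<String> savedIds = new ArrayList<>();
        public void save(String id, String name) { savedIds.add(id); }
        public String find(String id) { return null; }
    }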

The reason I chose this blog post in particular is that the first time I was exposed to stub functions was earlier in my internship while working on unit tests, and at the time I didn’t have a great understanding of their purpose. I was reminded of them last week during lecture, so I decided it would be a useful topic to read about. At the time I wasn’t aware that there were different types of test doubles, so I found the examples in Lipski’s blog really helpful for understanding the key differences between them, as they share many similarities and are easy to mix up. According to Lipski, misunderstanding and mixing up test double implementations may influence test design and increase the fragility of tests, standing in the way of seamless refactoring. So I think this blog is a useful resource for anyone trying to write clean unit tests.

Source: Test Doubles — Fakes, Mocks and Stubs.

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

Defending against Spectre and Meltdown attacks

http://news.mit.edu/2018/mit-csail-dawg-better-security-against-spectre-meltdown-attacks-1018

In January the security vulnerabilities Meltdown and Spectre were discovered. These vulnerabilities were born not from the usual sources of software bugs or physical CPU problems, but from the architecture of the CPU itself, which means that huge numbers of people and businesses were vulnerable. With a newly developed method of defense, it is much harder for attackers to get away with such attacks. This defense may also have an immediate impact on fields like medicine and finance, which limit their use of cloud computing due to security concerns. With Meltdown and Spectre, attackers took advantage of the fact that operations can take different amounts of time to compute. For example, someone trying to brute-force a password can look at how long it takes for a wrong password to be rejected and compare that to another entry; if one takes longer, then something in the entry that took longer matches a correct number or letter. The usual defense against this kind of attack is Cache Allocation Technology (CAT), which splits up cache memory so that it is not all stored in one area. Unfortunately, this method is still quite insecure because data remains visible across partitions. The new approach is a form of secure way partitioning called Dynamically Allocated Way Guard (DAWG). Because it is dynamic, it can split the cache and then change the size of those pieces over time. DAWG fully isolates one program from another through the cache while keeping performance comparable to CAT. It establishes clear boundaries for programs so that sharing does not happen when it should not, which is helpful for programs with sensitive information.
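
As a small illustration of the timing idea in the password example (my own sketch in Java, not code from the article), compare an early-exit comparison, whose running time reveals where the first mismatch occurs, with a constant-time version:

    public class TimingExample {

        // Leaky: returns as soon as a byte differs, so a guess sharing a
        // longer prefix with the secret takes measurably longer to reject.
        static boolean insecureEquals(byte[] secret, byte[] guess) {
            if (secret.length != guess.length) return false;
            for (int i = 0; i < secret.length; i++) {
                if (secret[i] != guess[i]) return false;  // early exit
            }
            return true;
        }

        // Constant-time: always inspects every byte, so the running time
        // no longer depends on the position of the first mismatch.
        static boolean constantTimeEquals(byte[] secret, byte[] guess) {
            if (secret.length != guess.length) return false;
            int diff = 0;
            for (int i = 0; i < secret.length; i++) {
                diff |= secret[i] ^ guess[i];
            }
            return diff == 0;
        }

        public static void main(String[] args) {
            byte[] secret = "hunter2".getBytes();
            System.out.println(insecureEquals(secret, "hunter1".getBytes()));
            System.out.println(constantTimeEquals(secret, "hunter2".getBytes()));
        }
    }

Spectre and Meltdown exploit far subtler timing channels inside the CPU cache, but the underlying principle, that execution time can leak secrets, is the same.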

The article mentions that these microarchitectural attacks are becoming more common because other methods of attack have become more difficult. I thought that was interesting, because this is a relatively new class of attack and a new security risk that has not had much time to receive defensive development. It is an issue that can affect anyone and is a serious problem. On top of that, performance is a big concern with this kind of security, since it deals directly with the CPU and its architecture, which is not an easy fix. The article also points out that, because of these attacks, more information sharing between applications is not always a good thing. I find this pretty interesting, since a large number of different applications made by the same company now have information-sharing capabilities, such as the Microsoft umbrella of software. Sharing information between programs can actually put you at more risk than the time saved by sharing is worth.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Test Scenario vs Test Case

https://reqtest.com/testing-blog/test-scenario-test-case/

This blog post compares two important aspects of software testing: test scenarios and test cases. A test scenario is a high-level documentation of a use case. It is used to make sure that the end-to-end functionality of the software works correctly. With this type of testing, clients, stakeholders, and developers help the testers create scenarios that ensure the software works as intended. Test scenarios look at software from the point of view of the user to determine real-world scenarios and use cases. Some important reasons to use test scenarios are:

  • They help validate that software is working correctly for each use case
  • They help determine the real-world use of the software
  • They help find discrepancies and improve the user experience
  • They save time, money, and effort
  • They are vital in evaluating the end-to-end functionality of the software
  • They help build better test cases because the test cases are derived from the scenarios

A test case is a set of conditions that help determine whether the software being tested satisfies requirements and works correctly. It is a single executable test that contains step-by-step instructions to verify that software functions the way it’s supposed to. A test case is used to validate a test scenario. Normally, a test scenario contains multiple test cases which contain information on how to test the scenario. This information includes prerequisites, inputs, preconditions, expected results, and post-conditions. Test scenarios are extracted from user stories and test cases are extracted from scenarios.
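
To illustrate the relationship (a made-up example of mine, not one from the blog), a scenario such as “check the login functionality” might be validated by several test cases, each with its own inputs and expected result:

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // One test scenario, validated by several concrete test cases.
    @DisplayName("Scenario: check login functionality")
    class LoginScenarioTests {

        @Test
        void validCredentialsLogTheUserIn() {
            assertTrue(login("alice", "correct-password"));
        }

        @Test
        void wrongPasswordIsRejected() {
            assertFalse(login("alice", "wrong-password"));
        }

        @Test
        void emptyUsernameIsRejected() {
            assertFalse(login("", "any-password"));
        }

        // Stand-in for the real system under test.
        private boolean login(String user, String password) {
            return !user.isEmpty() && "correct-password".equals(password);
        }
    }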

Both test scenarios and test cases should be used to ensure high test coverage. As agile practices become more common, test scenarios are being used more and more.

I thought that the content of this blog was interesting and useful. I learned the difference between test scenarios and test cases and why both of them are used. Since agile development environments are becoming so common, it is very useful to understand what test scenarios are. It was interesting to learn how test scenarios and test cases are related, because before reading this post I had no idea what differentiated them. Overall, this was an informative article that I enjoyed reading.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

The Process of Designing a Product

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post talks about an approach to designing a software product called “Activity-Based Planning.” The main idea of this method is to figure out the activity the user is doing and focus on making it easy to accomplish that activity. Some examples show how to apply this approach when designing a product.

First example: you’ve decided to make a web site that lets people create greeting cards. Using a somewhat naïve approach, you might come up with a list of features like this:

  • Add text to card
  • Add picture to card
  • Get predesigned card from library
  • Send card (using email or printing it out)

This way of thinking would lead to a program that starts out with a blank card and menu items for adding text, adding pictures, loading cards from a library, and sending cards. The user is then going to have to sit down, browse through the menus trying to figure out all the available commands, and synthesize for themselves how to put these atomic commands together to create a card. With activity-based planning, you instead come up with a list of activities that users might do. So you talk to your potential users, and you come up with this “top three” list: birthday greeting, party invitation, and anniversary greeting. Now, instead of thinking about your program from the programmer’s perspective (in terms of what features you need to make a card), you’re thinking about it like the user, in terms of what activities the user is doing, specifically:

  1. Sending a birthday card
  2. Planning a party, and inviting people to it
  3. Sending an anniversary card

Suddenly, there are new ideas of designing. Instead of starting with a blank card, you might start with a menu like this:

What do you want to do?

  • Send a birthday card
  • Send an anniversary card
  • Send a party invitation
  • Start with a blank card

Suddenly users will find it much easier to get started with your program, without browsing around on the menus, since the program will virtually lead them through the steps to complete the activity. The three activities suggest some great features which you might want to add. For example, if you’re sending a birthday or anniversary card, you might want to be reminded next year to send a card to the same person, so you might add a checkbox that says “remind me next year”.

Activity-based planning is even more important when you are working on version two of a product that people are already using. You should observe a sample of customers to see what they use your program for and which activities they perform with it. You can then add more activities to the program or make existing activities more suitable to certain groups of customers. Activity-based planning is helpful in the initial version of your application, where you have to make guesses about what people want to do, but it’s even more helpful when you’re planning the upgrade, because you understand what your customers are already doing.

In conclusion, designing good software takes about six steps:

  1. Invent some users
  2. Figure out the important activities
  3. Figure out the user model— how the user will expect to accomplish those activities
  4. Sketch out the first draft of the design
  5. Iterate over your design again and again, making it easier and easier until it’s well within the capabilities of your imaginary users
  6. Watch real humans trying to use your software. Note the areas where people have trouble, which probably demonstrate areas where the program model isn’t matching the user model


Article: https://www.joelonsoftware.com/2000/05/09/the-process-of-designing-a-product/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

Understanding the idea of Behavioral-Driven Development

So for this week, I decided to read about “Behavioral-Driven Development” on the Future Processing blog. The reason I chose this blog is that I often hear that this approach helps with the established practices of Test-Driven Development, making them more accessible and effective. Reading it helped me understand why some people suggest using this approach, and also the problems that make it difficult even to give an introduction to it.

The blog post goes over the motivations for introducing Behavioral-Driven Development, the reasons to use it, and the typical problems with it. Among the motivations listed are public distress, test automation, and better Test-Driven Development: public distress comes from those who want the practice introduced just for the sake of introducing it, test automation comes from wanting to automate tests even though the approach does not require automation, and better Test-Driven Development comes from the approach being understood merely as a higher layer of requirements. There are many reasons listed for using this approach, but the main one is that it is a communication tool. As a communication tool, it helps answer how a problem will be solved, clarifies when we can consider that the program solves our problem, and uncovers what should happen when an unusual scenario comes up. As for typical problems, there are incomprehensible scenarios (such as having no explicit dictionary), scenarios becoming unnecessary overhead, and difficulties using it in teams. In conclusion, ideological use of Behavioral-Driven Development is demanding and difficult, but it is always worth trying and adapting techniques like this communication tool, and it might lead to the right software for the needs of a client.

What I find useful about this blog is that it goes all the way to express this practice as a communication tool. It covers how Behavioral-Driven Development is generally introduced, gives lists of its uses and problems, and even has a scenario showing how it works, with further explanation. The content of this blog has definitely changed my way of thinking about this practice.

Based on the content of this blog, I would say it is a little difficult to understand at first if you don’t know about Test-Driven Development. However, I don’t disagree with any of the content, because it clarifies some things about Behavioral-Driven Development, such as understanding the perspective of a user. For future practice, I will try to use this approach with a Given-When-Then template.
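
As a sketch of how that template might look (my own example with a made-up Account class; BDD tools such as Cucumber express the same structure in plain-text Gherkin files instead):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class AccountWithdrawalTest {

        @Test
        void withdrawingReducesTheBalance() {
            // Given an account with a balance of 100
            Account account = new Account(100);

            // When the user withdraws 30
            account.withdraw(30);

            // Then the remaining balance is 70
            assertEquals(70, account.balance());
        }

        // Hypothetical class under test, kept inline for completeness.
        static class Account {
            private int balance;
            Account(int balance) { this.balance = balance; }
            void withdraw(int amount) { balance -= amount; }
            int balance() { return balance; }
        }
    }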


Link to the blog: https://www.future-processing.pl/blog/behaviour-driven-development/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.