Category Archives: Week 7

Controlling Your Environment Makes You Happy

This week I read a post by Joel Spolsky, the CEO of Stack Overflow, about user interface programming for software. Most of the hard-core C++ programmers Joel knows hate user interface programming. This surprised him, because he found UI programming to be quintessentially easy, straightforward, and fun. It’s easy because you usually don’t need algorithms more sophisticated than how to center one rectangle in another. It’s straightforward because when you make a mistake, you immediately see it and can correct it. It’s fun because the results of your work are immediately visible. You feel like you are sculpting the program directly.

Most programmers’ fear of UI programming might come from their fear of doing UI design. They think that UI design is like graphic design: the mysterious process by which artistic-minded people create cool-looking artistic stuff. Programmers see themselves as analytic, logical thinkers, strong at reasoning but weak on artistic judgment. Therefore, they think they can’t do UI design. However, UI design is quite rational. It’s not a mysterious matter that requires a degree from an art school. There is a rational way to think about user interfaces, with some simple, logical rules that you can apply anywhere to improve the interfaces of the programs you work on.

The user interface is how users interact with software. UI is important because it affects the feelings, emotions, and mood of users. If the UI is wrong and users feel like they can’t control the software, they literally will not be happy, and they’ll blame it on the software. If the UI is smart and things work the way users expected them to work, they will be cheerful as they manage to accomplish small goals. So the UI must respond to users in the way they expect it to respond; otherwise they are going to feel helpless and out of control. A psychological theory called Learned Helplessness, developed by Dr. Martin E. P. Seligman, holds that a great deal of depression grows out of a feeling of helplessness: the feeling that you cannot control your environment. To make people happy, you have to let them feel like they are in control of their environment. To do this, the user interface needs to correctly interpret the user’s actions.

The post gives us a way of thinking about designing and programming user interfaces for software. UI design should be considered a rational and logical process rather than a mysterious one that requires highly artistic judgment. The UI must behave in such a way that users feel like they are in control of the environment when they are using the software. The cardinal axiom of all user interface design: “A user interface is well-designed when the program behaves exactly how the user thought it would.”

Article: https://www.joelonsoftware.com/2000/04/10/controlling-your-environment-makes-you-happy/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

Composite Design Pattern: What It Is and How To Use It

We just wrapped up a section going over various design patterns used in software, so I wanted to do a bit more research on a different design pattern to get more of an idea about how they work. One pattern that looked interesting was the composite design pattern (https://howtodoinjava.com/design-patterns/structural/composite-design-pattern/), which appeared to combine a couple of concepts that I had seen in data structures: inheritance and trees. The composite design pattern essentially takes on the form of a hierarchy, with four types of items: a component, a leaf, a composite, and a client.

The component is the abstract entity in which common behaviors and operations are defined so that repeated code can be prevented. In the example provided in this post, a program to simulate the retrieval of banking information, the component contained a list of other Component objects, methods to add to and remove from that list, as well as other abstract and non-abstract methods that can be used by the classes extending from the component.

The composite and the leaf (or leaves) both inherit from the component. For this example, the CompositeAccount class served as the composite. This class implements the methods defined as abstract in the Component class. Because the composite and the leaves all inherit from Component, all of these objects from the different classes can be defined as Component objects. Thus, in the CompositeAccount class methods, all of the leaf objects are taken into account by traversing through the list of Component objects. For this example, the DepositAccount and SavingsAccount were leaves, with their own behaviors that were exclusive to the leaves.

Finally, the client puts it all together as the driver for this design pattern. New objects are created in this class, and each individual object is treated appropriately with the correct behaviors (i.e. the DepositAccount and SavingsAccount), while also being part of the bigger system, in this case the CompositeAccount, all of which are underneath the Component umbrella.
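
To make the hierarchy concrete, here is a minimal sketch of the pattern in Python, reusing the class names from the article’s Java example; the `get_balance` method and the balance values are my own illustrative choices, not taken from the post.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Common base: shared child-list handling plus an abstract operation."""
    def __init__(self):
        self.children = []
    def add(self, component):
        self.children.append(component)
    def remove(self, component):
        self.children.remove(component)
    @abstractmethod
    def get_balance(self):
        ...

class DepositAccount(Component):
    """Leaf: has its own state, no children of interest."""
    def __init__(self, balance):
        super().__init__()
        self.balance = balance
    def get_balance(self):
        return self.balance

class SavingsAccount(Component):
    """Another leaf with the same Component interface."""
    def __init__(self, balance):
        super().__init__()
        self.balance = balance
    def get_balance(self):
        return self.balance

class CompositeAccount(Component):
    """Composite: implements the operation by traversing its Component children."""
    def get_balance(self):
        return sum(child.get_balance() for child in self.children)

# Client: builds the tree and treats everything uniformly as a Component.
total = CompositeAccount()
total.add(DepositAccount(100.0))
total.add(SavingsAccount(250.0))
print(total.get_balance())  # 350.0
```

Because leaves and composites share the `Component` type, the client never needs to know whether it is holding a single account or a whole subtree.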

This was a great read for me! It was simple enough to understand coming from an extended introduction to design patterns and other programming courses using this kind of structure. I’m currently reading more about the Decorator design pattern for an assignment, so I hope to post more about that soon!

From the blog CS@Worcester – Hi, I'm Kat. by Kat Law and used with permission of the author. All other rights reserved by the author.

The D-Sign of C-Sci

In my software design course, I recently learned about how using design patterns helps you code better. I thought it would be a good review to go over the concepts this article introduces, potentially link them to things from class, and maybe even add some things we did not get through during class.

The three categories Federico Haag, a computer science engineering student at PoliMi, wrote about are creational patterns, structural patterns, and behavioral patterns.

Based on the design pattern I chose to work on for my individual assignment, I wanted to focus on the facade section, which is a structural pattern. The main takeaway of the facade is that it “provides a simplified interface to a larger body of code.” I like how the name itself actually relates to the word facade’s definition: “an outward appearance that is maintained to conceal a less pleasant or creditable reality” (Dictionary.com). As a person who likes the aesthetic side of things, this seems like a convenient design pattern, especially if people who are not working on the code end up seeing it; it may be less overwhelming to some.

Another one of the options my class had for the same assignment was the decorator pattern, which is also a structural pattern. This pattern “adds behavior to an object dynamically without affecting the behavior of other objects from the same class.” For some reason, when I imagine this concept, I think of a decorated cake. Since it is useful for adding the same behavior to many classes, it’s like adding a spread-out layer of fondant or frosting to a cake: it could cover the whole cake or just some sections, but it doesn’t mess up the inside of the cake.
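
As a rough sketch of that cake idea in Python (the `Cake` and `FrostingDecorator` classes are my own invented example, not from Haag’s article), a decorator wraps an object and layers behavior on top without modifying the object itself:

```python
class Cake:
    """The plain object being decorated."""
    def description(self):
        return "cake"
    def cost(self):
        return 10.0

class FrostingDecorator:
    """Wraps any cake-like object and adds behavior without touching its insides."""
    def __init__(self, wrapped):
        self.wrapped = wrapped
    def description(self):
        return self.wrapped.description() + " + frosting"
    def cost(self):
        return self.wrapped.cost() + 2.0

fancy = FrostingDecorator(Cake())
print(fancy.description(), fancy.cost())  # cake + frosting 12.0
```

The original `Cake` object is untouched; decorators can also be stacked, just like layers of frosting.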

Overall, I found this content very useful for reiterating what I had learned, and Haag incorporated visual UML diagram examples along with actual snippets of code to help us compare and contrast what he was showing. The content has not changed how I think about the subject because there is no arguing here; it just shows different ways people can structure their code. I do appreciate that Haag also listed “typical use cases” for some of the patterns, as it makes them easier to imagine.


Article: https://medium.com/federicohaag/coding-better-using-design-patterns-4d7385a9e7ac

From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.

Exploring Decision Tables for Software Testing

For this week’s blog post, I decided to choose a subject that would give me a little more experience with a technique for black box testing, or developing tests for software without direct access to the code of the application. This way, the tests are created with a bigger focus on the business logic instead of particular barriers that may arise within a programming language or the application’s overall stack. I found an article on softwaretestingclass.com that covers the usefulness of decision tables in software testing. The article discusses how to use decision tables as well as why they are such an important tool for black box testing. It also mentions other examples of black box testing techniques.

The author, Venktesh Somapalli, provides an example of a financial application where there are two possible conditions for a user: repayment within a term, or moving on to the next term of a loan. Somapalli goes through the steps a tester might take when constructing a decision table for this particular application. The decision table technique is based on conditions being either true or false, so all outcomes are absolute. This means that there should be only one outcome for a given set of conditions. However, the reverse is not true: there can be more than one set of conditions that produces a particular outcome. In the example in the article, if both conditions are true or both are false, an error message is the outcome. On the other hand, there is only one set of conditions that processes money, and one that processes the loan term.
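
The loan example can be sketched as a small function that enumerates the table’s rules; the condition names and outcome strings here are my own hypothetical stand-ins for the ones in the article.

```python
def loan_action(repayment_due, next_term):
    """Decision table for the loan example: each combination of the two
    boolean conditions maps to exactly one outcome."""
    if repayment_due and next_term:        # rule 1: both true  -> error
        return "error"
    if repayment_due and not next_term:    # rule 2: process the repayment
        return "process repayment"
    if not repayment_due and next_term:    # rule 3: process the loan term
        return "process loan term"
    return "error"                         # rule 4: both false -> error

# Exhaustively checking every row of the table:
for due in (True, False):
    for term in (True, False):
        print(due, term, "->", loan_action(due, term))
```

Note that two different rows (rules 1 and 4) share the "error" outcome, while each of the other outcomes has exactly one row, matching the article’s point about outcomes versus condition sets.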

My favorite part of this testing technique is that it is inherently non-technical, due to the nature of black box testing and the simplicity of the concept. This means that non-technical people within a business can understand and even develop test cases using this technique. As Somapalli points out, the technique is also versatile because it can be applied to any set of business logic. Somapalli also notes that decision tables are iterative, meaning that if any new conditions are added to the logic, the existing table can be reused and revised to cover the new logic without a complete reconstruction. I definitely agree with his arguments for the usefulness of this technique. I have used it in the past, and before reading this article I didn’t even consider how powerful this versatile strategy is. I will definitely continue to make use of it when developing test cases whenever possible throughout my career.

Link to the original article: https://www.softwaretestingclass.com/what-is-decision-table-in-software-testing-with-example/

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Journey into Design Patterns – A look into Adapter, Facade, and Memento

As I take another step into Software C.D.A., I remain on design patterns. I will point out the basics of the adapter pattern and will also talk about two patterns I just learned about: the Facade pattern and the Memento pattern. I will also point out the main difference between the adapter pattern and the facade pattern. I chose this topic because I was mainly trying to focus on the facade pattern, but my search on the internet for podcasts on this topic led me to “Design Patterns Part 4 – Adapter, Facade, and Memento” by Joe Zack. As the title indicates, it covers the adapter, facade, and memento design patterns. For this blog I will briefly summarize what they discuss about the adapter and memento patterns, since my main focus is the facade pattern; I must do my own research on that pattern for a homework assignment anyway, so I might as well hit two birds with one stone. That being said, if you wish to listen to the podcast, click on the highlighted link above of the podcast title.

Alright let’s dive into the facade pattern…

What is Facade Pattern?

The facade pattern is used in object-oriented programming; a facade is an object that serves as a front-facing interface masking more complex structural code.

What is facade intention?

The facade’s intent is to make a combined interface available for a set of interfaces; it is a form of wrapper.

When to use Facade?

We use a facade when there is a complex system you wish to make simpler, whether out of necessity or for your own convenience.
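
As a small illustration (the subsystem classes here are invented for the example, not taken from the podcast), a facade offers one simple call in place of several:

```python
# Hypothetical subsystem classes with their own fiddly interfaces.
class CPU:
    def start(self):
        return "cpu started"

class Memory:
    def load(self):
        return "memory loaded"

class Disk:
    def read(self):
        return "disk read"

class ComputerFacade:
    """Facade: one simple method hiding the multi-step startup sequence."""
    def __init__(self):
        self.cpu, self.memory, self.disk = CPU(), Memory(), Disk()
    def boot(self):
        # The caller never has to know the order or the subsystem details.
        return [self.cpu.start(), self.memory.load(), self.disk.read()]

print(ComputerFacade().boot())
```

The client code only ever touches `boot()`; the three subsystems stay hidden behind the facade.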

Okay, now that we have pointed out the basics of the facade, let’s cover the adapter basics and then compare the two.

What is Adapter Pattern?

Also known as a wrapper, the Adapter Pattern is a software design pattern that allows the interface of an existing class to be used as another interface. In software development the adapter pattern follows the same concept as adapters in real life, for example phone adapters. Adapter patterns are similar to phone power adapters in the sense that one adapter can be used with many different USB device cables. Say you’re at your friend’s house and you have an iPhone USB charger cable but no power adapter, and your friend owns an Android and only has an Android charger. He can unplug his Android USB cable from his power adapter and hand you the power adapter to use with your iPhone USB cable so you’ll be able to charge your phone. An adapter is convenient because it enables incompatible objects or devices to be used for the same purpose. The adapter pattern is also convenient because it allows you to use an existing class or interface by introducing a new class that adapts between the classes and interfaces without changing the existing class; that new class is known as the adapter class.
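
Here is a rough Python sketch of that charger analogy; all of the class and method names are invented for illustration, not taken from the podcast.

```python
class AndroidCharger:
    """The existing class (the 'adaptee') with an incompatible interface."""
    def charge_via_usb_c(self):
        return "charging over USB-C"

class IPhone:
    """The client, which expects a different interface."""
    def plug_in(self, charger):
        return charger.charge_via_lightning()

class ChargerAdapter:
    """Adapter class: exposes the interface the client expects and
    delegates the actual work to the adaptee, which stays unchanged."""
    def __init__(self, adaptee):
        self.adaptee = adaptee
    def charge_via_lightning(self):
        return self.adaptee.charge_via_usb_c()

print(IPhone().plug_in(ChargerAdapter(AndroidCharger())))
```

Neither `IPhone` nor `AndroidCharger` had to change; the adapter sits between them and translates one interface into the other.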


What are the differences between facade and adapter?

Although both of them are considered wrappers, there are differences. An adapter wraps only one object, while a facade wraps multiple. The adapter solves a non-compatible interface problem by making interfaces compatible through an existing interface, while the facade takes complicated interfaces or systems and transforms them into a simpler interface or subsystem by defining a new, easier interface and wrapping them up.

What is the Memento Pattern?

The memento pattern gives an object the ability to return to its previous state by restoring the object. It is also described as “undo via rollback”; as described in the podcast, memento aims at “rolling back changes to an object”.


How is memento pattern implemented and how it works?

The memento pattern is implemented with three objects, known as the originator, the caretaker, and the memento. The originator has an internal state; the caretaker can undo a change and does something with the originator, but the caretaker must first ask the originator for a memento object. The idea of “undo via rollback” happens once the caretaker returns the memento object to the originator. Another way to think about how the memento is implemented is that you take a snapshot of the internal state, and then you can restore the object to that internal state by passing the data back (this is similar to how it is described in the podcast).
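
A minimal sketch of the three objects in Python; the class names follow the pattern’s standard roles, while the state values are invented for the example.

```python
class Memento:
    """Snapshot of the originator's internal state."""
    def __init__(self, state):
        self.state = state

class Originator:
    """Owns the internal state and can hand out / accept snapshots."""
    def __init__(self):
        self.state = ""
    def save(self):
        return Memento(self.state)       # give the caretaker a snapshot
    def restore(self, memento):
        self.state = memento.state       # roll back when it comes home

# The caretaker is just whatever code holds the memento:
originator = Originator()
originator.state = "draft 1"
snapshot = originator.save()             # caretaker keeps this safe
originator.state = "draft 2"             # changes happen...
originator.restore(snapshot)             # ...then "undo via rollback"
print(originator.state)                  # draft 1
```

The caretaker never inspects the memento’s contents; it only stores it and hands it back, which keeps the originator’s internals encapsulated.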


The memento pattern is categorized as a behavioral design pattern. Both facade and adapter are categorized as structural design patterns but are used for different reasons. I will end this blog by saying that design patterns are a very important skill for software developers to have. That being said, do your research on design patterns, look for examples, and understand how they work in order to be a successful software developer.

Thank you for your time. This has been YessyMer in the World of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Boundary Value and Equivalence Partitioning

For today’s blog I am going to be referencing another blog called “What is Boundary Value Analysis and Equivalence Partitioning?” by Ulf Eriksson. The article describes boundary value testing as a way to test the values at the edges of the valid input ranges in test cases. Boundary testing is important so that we as testers can figure out where our valid input range lies and do not mistakenly use inputs outside of the valid range (the invalid range). It is also important so that if an end user does enter an invalid input, the program knows how to handle it rather than crashing and burning.

One other area the article talks about is equivalence partitioning. This is a little different from boundary value testing in that it divides the test input into ranges of values and selects one input from each range. This is a form of black box testing because you are testing the value without knowing what is going on inside and, in turn, without knowing the exact output. There is, however, an expected output that you determine before testing, and it will be proved either true or false based on the actual output of the test.

The best way to differentiate between boundary value analysis and equivalence partitioning is summed up very nicely in the article: boundary value testing tests the edges of the valid range of inputs, whereas equivalence partitioning slices that valid input area (as determined in the boundary value tests) into equal parts and selects one value from each partition to test. I think this is a very constructive way to test within a valid value range because you are getting a “good spread” of test values. In other words, the test values will come from across the spectrum of valid values because of the equal partitions and selecting a value from each partition.

I have been using this idea of equivalence partitioning in class by selecting test values such as min, min+, nom, max-, and max. We have learned in my software class that testing each of these values gives you a good showing of how accurate and how good your tests are. Thinking about these five test values as equivalence partitions within the valid boundary range will help me find these values more easily in the future.
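
Those five values can be computed mechanically for a numeric range. This is just a sketch assuming integer bounds, with the nominal value simply taken as the midpoint of the range:

```python
def boundary_values(min_val, max_val):
    """Return the five classic test points inside a valid range:
    min, min+, nominal, max-, max."""
    nominal = (min_val + max_val) // 2
    return [min_val, min_val + 1, nominal, max_val - 1, max_val]

# Example: a field that accepts values from 1 to 100.
print(boundary_values(1, 100))  # [1, 2, 50, 99, 100]
```

Each of these points sits in (or on the edge of) a partition of the valid range, so together they give that “good spread” across the spectrum of valid inputs.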

Link: https://reqtest.com/testing-blog/what-is-boundary-value-analysis-and-equivalence-partitioning/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Journey into Clean Code

As I take another step towards Software Quality Assurance Testing, I start to think and learn about how I would write a good unit test. That led me to a podcast, “Clean Code – How to Write Amazing Unit Tests” by Joe Zack. This podcast does a very good job of explaining the idea of clean code and how to write amazing unit tests. It also touches on Test-Driven Development.

The podcast first starts out talking about a few interesting things unrelated to the topic; it finally gets into the topic 17 minutes in. For this blog I am only going to touch on the clean code part of the podcast. To hear the full podcast, please click on the following link: https://www.codingblocks.net/podcast/how-to-write-amazing-unit-tests/#more-2483 Otherwise…

The podcast mentions the following things about clean code:

There are a few problems with keeping tests clean. For example, clean test code can outgrow your production code and become unmanageable. However, there are more problems with having dirty tests. For instance, when the code changes, the tests must also change, causing double work and making changes harder. If the tests are extremely dirty, they can become a liability.

Clean tests are important because they keep all test code readable, simple, clear, maintainable, and reusable. When you have tests, changing code becomes easier and less scary. Clean tests also make it easier to improve the architecture.

A great way to keep tests clean is to use the “Build – Operate – Check” pattern, which means building up the test data, operating on the test data, and checking that the operation yielded the expected results. Tests should be written in a specific way so they can be used on different testing platforms. Another way to keep tests clean is to have only one assertion per test, since it makes the test easier to read, although sometimes using multiple assertions is necessary and more beneficial. Clean tests can also be achieved by making sure to have a single concept per test. This idea is actually more important than having a single assertion, because it makes sure you only test related things instead of unrelated items. A good general practice to follow for clean tests is to remember “FIRST”. The F in “FIRST” stands for Fast, meaning a test should run quickly. The I stands for Independent, meaning each test must be independent of the others. The R stands for Repeatable, meaning tests must be repeatable in different environments without infrastructure-specific setup. The S stands for Self-validating, meaning a Boolean output of either true or false is required. The T stands for Timely, meaning tests must be written in a timely manner, particularly before the production code, to ensure the code is easy to test against.
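
A tiny example of the Build – Operate – Check shape (the `apply_discount` function and its values are my own invented example, not from the podcast):

```python
def apply_discount(price, percent):
    """Production code under test: reduce price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_discount_reduces_price():
    # Build: set up the test data
    price, percent = 200.0, 25
    # Operate: run the code under test
    result = apply_discount(price, percent)
    # Check: one concept, one assertion, self-validating (pass/fail)
    assert result == 150.0

test_discount_reduces_price()
```

The test is fast, independent of any other test, repeatable anywhere, and self-validating, ticking the boxes of “FIRST”.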

To sum this up, practicing clean code is a very good idea. From testing to production, clean code has more benefits than not. It is important to create unit tests so you can be confident when you need to change the production code. Over time this practice will prove to be beneficial, since it keeps improvements flexible and maintainable.


From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

B6: Test Automation vs Automated Testing

https://www.qasymphony.com/blog/test-automation-automated-testing/

The blog post I wanted to talk about today covers the difference between Test Automation and Automated Testing while also explaining new concepts like Continuous Testing. The post starts by explaining that there are two types of automation, known as Automated Testing and Test Automation. It defines Automated Testing as the conduct of specific tests through automation, while Test Automation is the automation of tracking and managing the different tests. It continues to talk about Test Automation and why it is critical to continuous testing. Continuous testing ensures that test quality is as high as it can be at all times. Usually tests are completed at the end of the development cycle, but now tests are done throughout the development cycle whenever they are needed. However, in the real world, testers need to verify and schedule test cases, which means they have to be in communication with other members of the team and the product owner to make sure that the original product requirements are still being met. They must break down these requirements to write the tests and track the progress of each test, which can take time. These models make continuous testing a viable option, and it works very efficiently, but testers now need to think about test management more carefully if they are going to integrate it into development.

I found this article interesting because of the new concepts it showed me for software development and testing. I enjoyed the enthusiasm this post had for Test Automation and Continuous Testing, because it paints these concepts in a light that really makes them seem like a new step forward in making any generic development cycle more efficient. The post explained the definitions of these terms clearly and then went into detail about their integration into testing, while also explaining how that would affect the overall development cycle. The most interesting part of this post was when they explained the pros and cons of this testing integration. I found that the pros of having a more efficient development cycle made sense, but I would never have thought about the amount of work that the testers would have to go through as well to make this work. This showed me that even though the system would ideally work, there is a vital element of communication, as always in a development cycle, that can help or hurt the process. I found this to be a good source of information about how testing is woven into development; through its explanations, it shows the reader that there can always be new ways to make testing more efficient.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

Learning about Test-Driven Development

So for this week, I have decided to read “What is Test-Driven Development?” from the Rainforest blog. The reason I chose this blog is that, from what I understand of test-driven development, it is hard to apply in practice and requires a lot of time the first time you go through the process. This blog will help me understand the advantages and disadvantages of using this process.

This blog post goes over the benefits of test-driven development, who needs it, how it works, and the disadvantages of using it. Test-driven development is a software process that follows a short, repetitive, and continuous cycle of creating test cases for the unique features companies want in their applications. Unlike traditional software testing, test-driven development implements testing before and during development. The benefits provided by this process are quickly shipping quality code to production, efficiently building test coverage of the application, and reducing the resources required for testing. This development style is good for teams with fast release cycles that still want to ensure their customers receive quality results, and for teams with few in-house QA practices instilled but that still value quality. The disadvantages are that product and development teams must be in lockstep, that it is difficult to maintain transparency about changes, and that it is initially time-intensive.
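
A minimal sketch of the test-first cycle, using the classic FizzBuzz exercise as an invented example (the blog’s own scenarios are different):

```python
# Step 1 (red): write the test first. Run now, it would fail,
# because fizzbuzz does not exist yet.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3 (refactor): clean up the code while the test keeps you safe.
test_fizzbuzz()
print("all tests pass")
```

The point is the order: the test exists before the implementation, so every line of production code is written to satisfy a failing test.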

What I think is interesting about this content is that it gives scenarios for each part to illustrate the process and why companies would use this development style. The blog breaks down the meaning of each section title, gives the details of the scenarios, and finally provides charts to give an idea of the process when everything works together. The content has changed my way of thinking: now that I understand the process, I no longer believe it would be very difficult to introduce it to a person learning programming for the first time.

Based on the content of this blog, I would say it is straightforward to understand once you go over the fundamentals a few times. I do not disagree with the content of this blog, because it helped me understand the idea of test-driven development through its short descriptions and charts. For future practice, I shall try to refer to this process when teaching those who have difficulty with programming.

Link to the blog: https://www.rainforestqa.com/blog/2018-08-29-test-driven-development/


From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.

Load Testing

https://reqtest.com/testing-blog/load-testing/

This blog post is an in-depth look at load testing. Load testing is a type of performance testing that identifies the limits to the operating capacity of a program. It is usually used to test how many users an application can support and whether the infrastructure that is being used can support it. Some examples of parameters that load testing can identify include response time, performance under different load conditions of system or database components, network delay, hardware limitations, and issues in software configuration.

Load testing is similar to stress testing, but the two are not the same. Load testing tests a system under peak traffic, while stress testing tests system behavior beyond peak conditions and the response of the system when it returns to normal load conditions. The advantages of load testing include improved scalability, reduction in system downtime, reduction in cost due to failures, and improved customer satisfaction. Some examples of load testing are:

  • Testing a printer by printing a large number of documents
  • Testing a mail server with many concurrent users
  • Testing a word processor by making a change in a large volume of data

The most popular load testing tools are LoadView, NeoLoad, WebLOAD, and Load Runner. However, load testing can also be done without any additional tools. The steps to do so are:

  • Create a dedicated test environment
  • Determine load test scenarios
  • Determine load testing transactions for the application
  • Execute the tests and monitor the results
  • Analyze the results
  • Refine the system and retest
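
The steps above can be sketched as a toy load-test harness in Python. Here the “system under test” is a stand-in function with simulated processing time, since a real load test would target a deployed application (usually with one of the tools named above); the user and request counts are arbitrary example values.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the system under test; a real test would hit a server."""
    time.sleep(0.01)   # simulated processing time per request
    return 200         # simulated HTTP status

def load_test(users, requests_per_user):
    """Fire all requests concurrently with `users` workers and time them."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = list(pool.map(lambda _: handle_request(),
                                 range(users * requests_per_user)))
    elapsed = time.time() - start
    return statuses, elapsed

# Execute the scenario and monitor/analyze the results:
statuses, elapsed = load_test(users=10, requests_per_user=5)
print(f"{len(statuses)} requests in {elapsed:.2f}s, "
      f"all OK: {all(s == 200 for s in statuses)}")
```

Varying `users` upward while watching `elapsed` and the failure count is exactly the “execute, monitor, analyze, refine, retest” loop from the list above.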

This blog post did a good job at defining load testing and explaining why it is an important process. I had heard of stress testing previously, so I appreciated how the author described the differences between the two. The part that I thought was most useful was the section on the advantages of load testing. It makes it easy to see why load testing is important if you are building something like a web app. It was also useful to read about some of the most popular tools because I now have a reference if I ever need to do this kind of testing in the future. I’m glad I read this blog as load testing seems like a very important practice in the field of software testing.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.