Category Archives: Week 9

Unified Modeling Language (UML)

Nowadays, the Unified Modeling Language has made it easier to describe software systems, business systems, and many other kinds of systems. Its diagrams combine words and pictures, which makes UML practical enough that almost anybody can use it. UML first appeared in 1997, and its content is controlled by the Object Management Group. The primary contributors to UML are Grady Booch, James Rumbaugh, and Ivar Jacobson.

UML Basic Notation

(Image: UML class notation.)

Why UML?

UML unified the terminology and differing notations that came before it, which leads to better communication between all parties across the departments of a company. It is also much easier for co-workers to access and share information while working on the same project. UML is a very powerful modeling language, which makes it practical even for small projects, and stereotypes can extend its functionality when the standard notation is not sufficient for certain kinds of projects. UML did not come out of nowhere: it grew out of real-world problems with existing modeling languages that needed modification or unification. This is why it is so widely supported; its usability and functionality are grounded in real-life problems.

UML 2.0 defines 13 different types of diagrams, each of which may be expressed at different levels of detail. They are classified into three categories: structural diagrams, behavioral diagrams, and interaction diagrams.

– The Structural Diagrams represent elements that are static in nature and are fundamental to the UML modeling of a system. This category contains:

the Class diagram, the Component diagram, the Composite Structure diagram, the Deployment diagram, the Object diagram, and the Package diagram.

– The Behavioral Diagrams represent the modeling of how the system functions. This category contains:

the Use Case diagram, the Activity diagram, and the State Machine diagram.

– The Interaction Diagrams represent how the flow of data and control works in the modeled system. This category contains:

the Communication diagram, the Sequence diagram, the Timing diagram, and the Interaction Overview diagram.

 

In conclusion, the Unified Modeling Language is an internationally accepted standard for object-oriented modeling and can be used to represent a model that adopts the best software architecture practices.

References:

https://commons.wikimedia.org/wiki/Unified_Modeling_Language

https://en.wikipedia.org/wiki/Unified_Modeling_Language

https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

B7: Black-Box vs. Gray-Box vs. White/Clear-Box Testing

http://blog.qatestlab.com/2011/03/01/difference-between-white-box-black-box-and-gray-box-testing/

I found an interesting blog post this week that talked about the differences between white box, black box, and gray box testing. It started with black box testing, explaining that it is an approach where the tester has no access to the source code or any part of the software's internals. The basic goal of this type of testing is to make sure that inputs and outputs work from the point of view of a normal user. The blog goes on to describe the main features of this testing type, explaining that the test design is based on the software specification and detects anything from GUI errors to control flow errors. Test preparation is short, which helps make the whole process much faster, but the approach lacks the detail needed to test individual parts of the software. The blog then covers white box testing, which allows the tester to have access to the source code. This lets the tester know which line of code corresponds to which functionality, allowing more detailed, individual tests of code functionality and the ability to anticipate potential problems, but it takes more time and can be complex and expensive. As for gray box testing, the blog explains that it is an approach where testers have only a partial understanding of the internal structure; its advantages and disadvantages are inherited from the related white and black box techniques.

I chose this article because I remember that we learned about this subject at the beginning of the year. I wanted better knowledge of the advantages and disadvantages, which is what sparked my initial curiosity. I found the content really interesting because it explains what testing looks like when dealing with different amounts of access to the internals. I enjoyed how the post explained the types of testing sequentially while also explaining how they differ from each other. I was able to grasp an understanding of these testing types while also understanding the important, and sometimes vital, role each one plays depending on the situation. The most interesting part of the blog post was gray box testing, because it combines aspects of black and white box testing. It also deals with regression testing and matrix testing, which are very important when testing bits of code at a time. I found that the diagrams used in the post gave the reading an easier flow, which helped me understand it better. Overall, the blog was a great source that summarized and simplified the detailed ideas it covered.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

Visitor Design Pattern

For this week’s blog post I will be discussing another design pattern: the Visitor design pattern, as discussed on Source Making’s website. Essentially, the Visitor pattern allows you to add methods that operate on classes of different types without much alteration to those classes, letting you define completely different behavior depending on the class being visited. Because of this, you can also define external classes that extend the functionality of other classes without heavily editing them. Its primary focus is to abstract functionality that can be applied to an aggregate hierarchy of element objects. This promotes lightweight element classes, because the processing functionality is removed from their list of responsibilities, and new functionality can be added later simply by creating a new Visitor subclass.

The implementation of Visitor begins when you create a visitor class hierarchy that defines a pure virtual visit() method in the abstract base class for each concrete derived class in the aggregate node hierarchy. From there, each visit() method accepts a single argument: a pointer or reference to an original Element-derived class. In short, each operation to be supported is modeled with a concrete derived class of the visitor hierarchy. Adding a single pure virtual accept() method to the base class of the Element hierarchy allows accept() to receive a visitor as its single argument. The accept() method dispatches flow of control to the correct Element subclass, and once a visit() method is invoked, flow of control is vectored to the correct Visitor subclass. The website listed below goes into much more detail about what exactly happens, but I have summed it up here so that most readers can follow. Essentially, the Visitor pattern makes adding new operations easy (simply add a new Visitor-derived class), but if the Element subclasses are not stable, keeping everything in sync can be a struggle.
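The accept()/visit() double dispatch described above can be sketched in Python. This is my own illustration with a hypothetical Shape hierarchy (Circle and Square), not code from the Source Making article:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def accept(self, visitor):
        """Dispatch to the visit method matching this concrete class."""

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def accept(self, visitor):
        return visitor.visit_circle(self)

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def accept(self, visitor):
        return visitor.visit_square(self)

class Visitor(ABC):
    # One visit method per concrete element class.
    @abstractmethod
    def visit_circle(self, circle): ...
    @abstractmethod
    def visit_square(self, square): ...

# A new operation is just a new Visitor subclass; the elements stay untouched.
class AreaVisitor(Visitor):
    def visit_circle(self, circle):
        return 3.14159 * circle.radius ** 2
    def visit_square(self, square):
        return square.side ** 2

shapes = [Circle(1.0), Square(2.0)]
areas = [s.accept(AreaVisitor()) for s in shapes]
```

Adding, say, a PerimeterVisitor later would require no change to Circle or Square, which is exactly the benefit the pattern is after.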

Example of Visitor Design Pattern

The example they use in the article is based around the operation of a taxi company. When somebody calls a taxi company (literally accepting a visitor), the company dispatches a cab to the customer. Upon entering the taxi, the customer (the Visitor) is no longer in control of his or her own transportation; the taxi driver is.

All in all, I thought this website was really good at explaining what the Visitor Design Pattern is. I have used this website before for previous research into design patterns and more.

 

https://sourcemaking.com/design_patterns/visitor

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Keep Software Design Simple

Good day my dear reader! With it being quite deep in the semester now, one can correctly assume that I have learned quite a bit regarding Software Design and I have. With all this new information, I read an article that has helped me put all of it in perspective. This article was Simplicity in Software Design: KISS, YAGNI, and Occam’s Razor from the Effective Software Design blog.

This particular blog post is all about keeping your software design simple and details three different design principles to keep in mind while designing software and the effects of making a design too complex and conversely too simplistic.

The first principle is “Keep it simple, stupid,” more commonly known as KISS, heralding that keeping designs simple is key to a successful design.

“The KISS principle states that most systems work best if they are kept simple rather than made complex; therefore simplicity should be a key goal in design and unnecessary complexity should be avoided.”

The next principle is, “You Aren’t Gonna Need It” or YAGNI stating that functionality should be added only when it is actually needed and not before.

The last principle is Occam’s Razor, which says that when creating a design, we should avoid basing our designs on our own assumptions.

Not following these principles can result in complex designs that lead to feature creep, software bloat, and over-engineering. It could alternatively result in oversimplification, where the design is too simple to do its job. That could lead to trouble down the line in areas such as maintainability, extensibility, and reusability.
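As a toy illustration (my own, not from the blog post), compare an over-engineered design with the simple one KISS and YAGNI suggest, assuming the only requirement today is a single English greeting:

```python
# Over-engineered (violates KISS and YAGNI): a configurable strategy
# framework for a task with exactly one known requirement.
class GreetingStrategy:
    def greet(self, name):
        raise NotImplementedError

class EnglishGreetingStrategy(GreetingStrategy):
    def greet(self, name):
        return f"Hello, {name}!"

class Greeter:
    def __init__(self, strategy):
        self.strategy = strategy
    def greet(self, name):
        return self.strategy.greet(name)

# Simple (KISS): does exactly what is needed today.  A strategy can be
# introduced later, when a second greeting is actually required (YAGNI).
def greet(name):
    return f"Hello, {name}!"

# Both produce the same result; only one is easy to read and debug.
assert Greeter(EnglishGreetingStrategy()).greet("Ada") == greet("Ada")
```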

Reading this blog post made me sit back and think about my previous programming assignments. Looking back, my programs have indeed been overly complex. A great quote provided in the blog post is from Brian Kernighan.

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

This gave me a good chuckle and is something I can definitely agree with, having done it myself in the past. I will admit that I had never considered the consequences of oversimplification. When the majority of your programs are single-use for a specific scenario, you never really have to consider the consequences beyond getting the program done. Another excellent quote provided by the author in the blog is from Albert Einstein.

“Keep it simple, as simple as possible, but not simpler.”

I completely agree with the author here that this quote expresses the danger of simplistic design that must be considered. It is easier to make things too simplistic rather than hitting the sweet spot of as simple as it needs to be.

This article is one that I intend to keep in mind whenever I sit down to start a new programming project, especially as class projects start to roll in at this time of the semester. I do believe that keeping KISS, YAGNI, and Occam’s Razor in mind will lead to simple software design and greatly improve any designs that I come up with.

From the blog CS@Worcester – Computer Science Discovery at WSU by mesitecsblog and used with permission of the author. All other rights reserved by the author.

On the Differences of Test Doubles

In his post “Test Double Rule of Thumb”, Matt Parker describes the five types of test doubles in the context of a “launchMissile” program. The function takes a Missile object and a LaunchCode object as parameters; if the LaunchCode is valid, it fires the missile. Since there are dependencies on outside objects, test doubles should be used.

First, we create a dummy missile. This way we can call the method using our dummy and test the behavior of the launch codes. The example Parker gave is a test verifying that, given expired launch codes, the missile is not fired. If we passed our dummy missile and invalid launch codes to the function, we would be able to tell whether the dummy was ever called.
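That dummy scenario might look like this in Python. This is my own sketch; the class and function names, and the shape of the function under test, are hypothetical rather than taken from Parker's post:

```python
class DummyMissile:
    """A dummy just fills the parameter slot; here it also fails loudly
    if the code under test ever uses it."""
    def launch(self):
        raise AssertionError("missile must not launch on invalid codes")

class ExpiredLaunchCode:
    """Hard-coded to be invalid, standing in for real expired codes."""
    def is_valid(self):
        return False

def launch_missile(missile, launch_code):
    # Hypothetical sketch of the function under test.
    if launch_code.is_valid():
        missile.launch()

# Passing the dummy with expired codes: if the missile were launched,
# the AssertionError would surface; no exception means the test passes.
launch_missile(DummyMissile(), ExpiredLaunchCode())
```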

The next type of test double Parker explains is the spy. Using the same scenario, he creates a MissileSpy class that records whether launch() was called: if it was, a flag is set to true, and we can check that flag to determine whether the spy was used.
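A spy along those lines can be sketched like this (again my own Python sketch with hypothetical names, including the function under test):

```python
class MissileSpy:
    """Records whether launch() was ever called."""
    def __init__(self):
        self.launch_was_called = False
    def launch(self):
        self.launch_was_called = True

class ValidLaunchCode:
    """Hard-coded to be valid for this test."""
    def is_valid(self):
        return True

def launch_missile(missile, launch_code):
    # Hypothetical sketch of the function under test.
    if launch_code.is_valid():
        missile.launch()

spy = MissileSpy()
launch_missile(spy, ValidLaunchCode())
# The spy's flag tells us the collaboration actually happened.
assert spy.launch_was_called
```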

The mock is similar to the spy. Essentially, a mock object is a spy that has a function to verify itself. In our example, say we wanted to verify a “code red”, where the missile was disabled given an invalid LaunchCode. By adding a “verifyCodeRed” method to our spy object, we can get information from the object itself, rather than creating more complicated tests.

Stubs are test doubles that are given hard-coded values used to identify when a particular function is called. Parker explains that all these example tests have been using stubs, as they all return a boolean, where in practice real launch codes would be provided. But we do not need to know about the LaunchCodes to test our LaunchMissile function, so stubs work well for this application.

The final type of double Parker describes is the fake. A fake is used when there’s a combination of read and write operations to be tested. Say we want to make sure multiple calls to launch() are not satisfied. To do this, he explains how a database should be used, but before designing an entire database he creates a fake one to verify that it will behave as expected.
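A fake that supports both reads and writes might be sketched like this in Python. This is my own sketch: the names and the duplicate-launch rule are hypothetical, standing in for the database Parker describes faking:

```python
class FakeLaunchCodeDatabase:
    """In-memory fake standing in for a real database of used codes.
    It really works (reads and writes), it just isn't production-grade."""
    def __init__(self):
        self._used = set()
    def was_used(self, code):
        return code in self._used
    def mark_used(self, code):
        self._used.add(code)

class MissileSpy:
    def __init__(self):
        self.launch_count = 0
    def launch(self):
        self.launch_count += 1

def launch_missile(missile, code, db):
    # Hypothetical sketch: refuse to fire the same code twice.
    if not db.was_used(code):
        db.mark_used(code)
        missile.launch()

spy, db = MissileSpy(), FakeLaunchCodeDatabase()
launch_missile(spy, "red-1", db)
launch_missile(spy, "red-1", db)  # second call must not be satisfied
assert spy.launch_count == 1
```

The fake lets us verify the read/write behavior before committing to designing an entire real database.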

I found this post helpful in illuminating the differences between the types of doubles, as Parker’s use of simple examples made it easy to follow and get additional practice identifying these testing strategies.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

The Four Levels of Testing in Software

For this week, I have decided to read “Differences between the different levels of testing” from the ReqTest blog. The reason I chose this blog is that it is crucial to understand the basis for each testing level. Even though the post is short and simple, it helps in understanding the development process in terms of test levels.

This blog post basically goes over the four recognized levels of testing: unit or component testing, integration testing, system testing, and acceptance testing. Unit or component testing is the most basic type of testing and is performed at the earliest stages of the development process. It aims to verify each part of the software by isolating it and then performing tests on each component. Integration testing aims to test different parts of the system together to assess whether they work correctly with each other; it can be adopted in a bottom-up or top-down fashion depending on the modules. System testing tests all components of the software together to ensure the overall product meets the requirements specified. It is a very important step in the process, as the software is almost done and needs confirmation. Acceptance testing is the level that determines whether a product is ready or not; it aims to evaluate whether the system complies with the end-user requirements and whether it is ready to be deployed. These four levels form not only a hierarchy but also a sequence in the development process, and they show that testing early and testing frequently is well worth the effort.
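The difference between the first two levels can be sketched with a small hypothetical Python example (my own, not from the ReqTest post):

```python
# Unit/component level: verify each part in isolation.
def apply_discount(price, percent):
    """One component: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def total(prices):
    """Another component: sum a list of prices."""
    return round(sum(prices), 2)

assert apply_discount(100.0, 25) == 75.0   # unit test for one component
assert total([1.10, 2.20]) == 3.30         # unit test for another

# Integration level: check that the parts work correctly *together*.
def checkout(prices, percent):
    return apply_discount(total(prices), percent)

assert checkout([40.0, 60.0], 25) == 75.0
```

System and acceptance testing would then exercise the whole application and the end-user requirements, which is beyond what a snippet can show.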

What I think is interesting about this blog is the simplicity of explaining each testing level. Each level gets a small definition of what it is supposed to do, when it is used, and an example of its place in the process. This content has changed the way I think about testing by giving explanations in a format that is not too complicated to follow.

Based on its contents, I would say that this blog is short and very easy to understand. I do not disagree with anything in it, because the ideas given for the testing levels do connect to the development process. For future practice, I shall try to adopt a mind of constant alertness in my projects with these four levels. That way, I can be better at detecting software errors.

Link to the blog: https://reqtest.com/testing-blog/differences-between-the-different-levels-of-tests/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.

Writing Great Unit Tests

In the blog post Writing Great Unit Tests: Best and Worst Practices, Steve Sanderson talks about the best and worst practices when writing unit tests. He goes over the true purpose of unit tests, which is to examine each unit of your code separately while, taken together, cohesively providing value that is more complex and subtle than the sum of the independently tested parts (not to find bugs), as well as the purpose of integration tests (to automate the entire system in order to detect regressions). At the end of his post, he also gives several useful tips for writing great unit tests, such as making each test orthogonal (independent of) all other tests.
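The orthogonality tip can be illustrated with a small Python sketch (my own example, using a hypothetical Counter class): each test builds its own fixture, so no test depends on state left behind by another and they pass alone or in any order.

```python
import unittest

# Hypothetical unit under test.
class Counter:
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterTest(unittest.TestCase):
    # setUp gives every test a fresh Counter, keeping the tests orthogonal.
    def setUp(self):
        self.counter = Counter()

    def test_starts_at_zero(self):
        self.assertEqual(self.counter.value, 0)

    def test_increment_adds_one(self):
        self.counter.increment()
        self.assertEqual(self.counter.value, 1)

suite = unittest.TestLoader().loadTestsFromTestCase(CounterTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```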

The reason I chose to talk about this blog post is because I think it’s definitely something that’s commonly overlooked by developers. As Sanderson said at the beginning of his post, “Even if you’re a brilliant coder with decades of experience, your existing knowledge and habits won’t automatically lead you to write good unit tests.” For people looking to get into software development, I think it’s important to learn how to write great unit tests early on so as to avoid having to clean up a self-inflicted mess in the future.

I found it interesting when he described the difference between unit tests and integration tests, as well as the problems that bad unit tests can cause. This image found in his post is useful for visualizing this:

(Image from the original post.)

The last section, in which he gives practical advice for writing great unit tests, is also something that I think will be useful in the future, although I think the formatting may have been messed up.

One thing that I have a hard time understanding (not necessarily disagreeing with) is his claim that unit testing isn’t for finding bugs. I think, for example, that if you were to change the way a function performs its task (perhaps to optimize the code) while not intending to affect the end result, one of your unit tests failing because of this could reasonably be classified as “finding a bug.”

Source: http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

Anti-patterns

This blog https://effectivesoftwaredesign.com/2010/12/22/identifying-anti-patterns/ titled “Identifying Anti-Patterns” discusses what it refers to as “anti-patterns”: a category of common code practices that resemble the organizational structure provided by design patterns but are actually counterproductive and not good design. I think the existence of anti-patterns is interesting; in an effort to write code that is well structured and easy to follow, the code is actually made worse.

The blog post points out that anti-patterns are most commonly produced by inexperienced programmers who end up writing code with bad design and bad performance, but it is also possible for experienced programmers to implement a good design at the cost of a significant sacrifice in performance. In general, I think it is common for a design pattern to trade some performance for readability and maintainability, so there must be some line at which a design pattern becomes an “anti-pattern” once it causes a certain level of performance loss. Design patterns are commonly used for the sake of scalability, so that a program with a well-structured foundation is easier to maintain as it grows larger, but patterns introduced at the very beginning of development may seem like unnecessary anti-patterns, unnecessarily abstract for the current scope of the program. It may be difficult to identify anti-patterns, given that excuses and arguments can be made for why code should be implemented a certain way. Over-complicating things has an impact on performance, but an organized foundation is well suited to a large project, and re-implementing a lot of code as a project grows would likely be more counterproductive than being careful from the beginning.

There definitely are some practices that are objectively wrong, but this blog post does not go into any examples, and it is also possible that what is identified as an anti-pattern could be a false positive. When there is a trade-off between design and performance, it makes the most sense for an anti-pattern to refer to a mistake that is ineffective in both areas.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

(NaN == NaN) == false

In the blog post https://medium.com/engineering-housing/nan-is-not-equal-to-nan-771321379694 “NaN is not equal to NaN!”, Dron Rathore discusses the IEEE standard under which NaN is a value that is not equal to itself. The blog explains some of the definitions and implementations surrounding NaN; it is not an opinionated post, but mainly an educational resource. My particular interest is the actual reason NaN is defined as not being equal to itself in the first place. The result of a comparison must be a boolean, so the only options for comparing NaN to itself are to return true, return false, or raise an error and crash. In mathematics, NaN is effectively “undefined” or “indeterminate”: something like 0/0 is undefined, and the truth value of the equation 0/0 = 0/0 is also undefined. The operation of equality is not defined for values that are not themselves defined, unless the operation is given additional definition to account for that case, which is what programming languages must do so that comparison always yields a boolean. The choice for that value to be false seems peculiar and ultimately arbitrary, but it is useful for detecting NaN values: if (x == x) is false, then x is NaN. This blog post does not directly justify this implementation of NaN, it merely describes it, but I would like some perspective on how the choice was made and why it is more logical to have NaN not equal to itself than an alternative implementation where it is. Comparisons involving NaN can still produce confusing outputs: infinity > NaN is false, for instance, and so is infinity <= NaN, yet “not (infinity <= NaN)” is true. For the sake of software testing, NaN adds a lot of strange edge cases where assumptions about equality lead to contradictions.

In these cases, or in any case where it is not okay for NaN to exist, it makes the most sense to just raise errors instead of trying to handle this unique behavior.
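The behavior described above is easy to verify in Python, which follows the same IEEE 754 rules:

```python
import math

nan = float("nan")

# NaN is the only value not equal to itself...
assert (nan == nan) is False
assert (nan != nan) is True

# ...which is exactly how it can be detected without a library call:
def is_nan(x):
    return x != x

assert is_nan(nan)
assert not is_nan(0.0)
assert math.isnan(nan)  # the standard-library equivalent

# The confusing ordered comparisons mentioned above:
inf = math.inf
assert (inf > nan) is False
assert (inf <= nan) is False
assert (not (inf <= nan)) is True
```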

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Quality Assurance as a Career

I decided to take a somewhat different tack for this week’s post for software quality assurance and testing. Instead of focusing on testing itself, and all there is to it, I found a video from a tester in the field, Alan Richardson, on his advice for someone who is interested in getting into the field.
He strongly urged someone getting started not to think of a position in quality assurance as a “stepping stone” to being a software developer. If you do, you will inevitably find yourself in a dead end because you really aren’t interested in the field. “If you want to be a software developer, start as a software developer.” 
He encourages the viewer to read everything they can on the subject. A lot of it is free, so there is no reason you necessarily have to buy anything; however, he gives some book recommendations. He offers a good insight into why books can be so valuable for learning: “An expert in the field took a year to concentrate everything they know into those pages” (paraphrased).
He doesn’t value certification, but he realizes that many companies do. I didn’t realize there was certification. Even if I don’t end up getting it, it’s useful to know that it exists. He also said that it is easier to go into testing from designing software. 
He also urged testers to find companies that valued the work they did and that provide opportunities for them. He said that oftentimes testers are paid quite a bit less than they’re worth compared to software designers. Not only is picking the right company important, but you should advocate for yourself because the work you do is important.
I thought that he offered some sound advice. I haven’t seen that much other software quality assurance career advice, but this all seems to fit what I’ve heard over the years for computer science or general career advice. I am excited to start working for a company, perhaps one day as a tester.
Strangely, I don’t often think about all the different positions within the computer science field. I tend to lump everyone together as a “software developer,” even though I know there is a lot more to it than that. I am starting to rethink that and consider going into quality assurance. It is something I enjoy, and it is something I can see myself doing.
YouTube Channel: EvilTester – Software Testing
https://youtu.be/iOA3lxZyFwA

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.