Category Archives: Week 4

Testlio Changes the Game

There’s a small company out of Estonia that is making a big splash in the world of software testing. Testlio was founded after CEO Kristel Kruustük was on the winning team of a 2012 hackathon. A September 15th, 2017 article on zdnet.com profiles the CEO and the company to provide insight into the software testing company’s mission and outlook on testing (http://www.zdnet.com/article/how-testilio-wants-to-rethink-software-testing/). Kruustük worked at several companies around the world and noticed flaws within the software testing community. She and her company take a different approach to hiring and to their view of junior-level testers.

Testlio has found that companies with strong diversity among their employees also show a clear improvement in productivity. Testlio has implemented this diversity hiring practice at its own company and has found the same increase in productivity. Another practice Testlio follows is putting more emphasis on junior-level testers and moving away from senior testers. While Testlio has aggressively embraced this outlook on hiring youth, it may not mean what most people think. Kruustük explains that this is not necessarily about the age of the tester or how long the tester has been in the business, but about staying away from testers who have sat in the same position and are less willing to try different methods or take advice from outside sources.

Kruustük and Testlio do not see testing as just a mundane task that has to be done, but instead as its own separate business. This mentality has allowed Kruustük to grow Testlio into a business with two offices, one in San Francisco and the other in Tallinn. They also have 200 testers who work on products with roughly 650 million monthly users. With the number of users and testers that Testlio has, it is hard to deny that their unique approach to testing and hiring yields significant results and is part of a sustainable business model. Testlio is a new company embracing junior testers, whom it has found to be more open to new ideas and to thinking outside the box. I have a feeling that Testlio will continue to grow as they open their doors to a younger, more diverse workforce. They will soon become a very common name in the industry, and I think they will be a pioneer for a new outlook on testing: embracing diversity and constantly bringing new talent into their ranks.

From the blog CS@Worcester – Tim's Blog by nbhc24 and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-10-09 21:37:45

The blog post this is written about can be found here.

I chose this blog post because it directly relates to the material we learned on equivalence class testing and our assignment based around it, and because it led me to a deeper understanding of what equivalence class testing is and the ways it can be applied more broadly. In it, the blog’s writer takes apart the Wikipedia article on equivalence class partitioning from a professional tester’s perspective.

The blogger defines an equivalence class as a set of things that have some quality in common that makes them more or less equally able to reveal a certain kind of bug, if it were present in the code. He writes that equivalence classes are not necessarily limited to classes of input data, even though common definitions of the concept characterize it that way. Instead, equivalence class partitioning can apply to anything you might be thinking of doing which has variations that could influence the outcome of a test.

The blogger takes issue with the Wikipedia article saying the technique is meant to reduce the number of test cases, since (as I learned earlier) the number of test cases tells you nothing. He writes that instead, equivalence class partitioning is a method of reducing test effort, and that this reduction is a side effect of focusing test effort, which is accomplished by making educated guesses about where the bigger bugs are most likely to be. He stresses that the technique is based heavily on our mental model of a piece of software and that it’s a fallible method of testing.
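
As a minimal sketch of how picking one representative per class might look (my own invented example, not from the blog post, and assuming JUnit 4), suppose we’re testing a hypothetical isValidAge method that accepts ages 0 through 120:

import static org.junit.Assert.*;

import org.junit.Test;

public class AgeValidatorTest {
    // Hypothetical method under test: valid ages are 0 through 120
    static boolean isValidAge(int age) {
        return age >= 0 && age <= 120;
    }

    @Test
    public void oneRepresentativePerEquivalenceClass() {
        assertFalse(isValidAge(-5));  // class: negative ages
        assertTrue(isValidAge(35));   // class: ages within the valid range
        assertFalse(isValidAge(200)); // class: ages above the valid range
    }
}

Any input within a class should be roughly as likely as any other to expose the same bug, so one well-chosen value per class focuses the test effort in the way the blogger describes.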

I found it particularly useful when he used the real-life scenario of pushing against a door to try to open it as an example. If we push against one part of a door and it won’t move, we won’t try pushing every other spot on the door, because we intuitively understand that most places you can push on a door are more or less equivalent. Pushing once was enough to discover it was jammed, and we feel confident in assuming that pushing elsewhere within the set of spots you can push will have the same result. This thought process is the same as the one used in equivalence class testing.

Having read this post, I will make sure to have a good understanding of how a piece of code is meant to work before I set about trying to determine the equivalence classes for it, and I’ll keep in mind that the concept of equivalence class partitioning can be used with things other than inputs.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

Singletons: Bill Pugh Solution or Enum

In this blog post, Harinath Kuntamukkala discusses different approaches to the Singleton pattern. The first implementation he goes over is the Bill Pugh Solution, which is similar to the implementation we learned in class except it uses a static inner helper class as in the example below:

public class Logger {
    private Logger() {
        // private constructor
    }
    // static inner class - inner classes are not loaded until they are
    // referenced.
    private static class LoggerHolder {
        private static final Logger logger = new Logger();
    }
    // global access point
    public static Logger getInstance() {
        return LoggerHolder.logger;
    }
    //Other methods
}

This would seem to be the best approach, but it’s still possible for more than one instance to be created through Java reflection. For example:

import java.lang.reflect.Constructor;

public class LoggerReflection {
    public static void main(String[] args) {
        Logger instance1 = Logger.getInstance();
        Logger instance2 = null;
        try {
            Constructor<?>[] cstr = Logger.class.getDeclaredConstructors();
            for (Constructor<?> constructor : cstr) {
                // Make the private constructor accessible
                constructor.setAccessible(true);
                instance2 = (Logger) constructor.newInstance();
                break;
            }
        } catch (Exception e) {
            System.out.println(e);
        }
        // The two hash codes differ, showing a second instance was created
        System.out.println(instance1.hashCode());
        System.out.println(instance2.hashCode());
    }
}

The solution to the problem, suggested by Joshua Bloch, is to use an enum. The reason is that Java ensures any enum value is instantiated only once. Using an enum, this is what the Logger class would look like:


public enum Logger {
    INSTANCE;
    //other methods
}
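
As a quick usage sketch (my own illustration, not code from the article), every reference to the enum constant resolves to the same object, so an identity check always succeeds:

public class EnumSingletonDemo {
    public static void main(String[] args) {
        Logger first = Logger.INSTANCE;
        Logger second = Logger.INSTANCE;
        // The JVM instantiates the enum constant exactly once
        System.out.println(first == second); // prints true
    }
}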

It’s still possible for more than one instance to be created if a singleton object is serialized and then deserialized more than once. In order to avoid this, you can implement a readResolve() method in the Logger singleton class:


import java.io.Serializable;

public class Logger implements Serializable {
    // other code related to the Singleton
    // called by the serialization machinery immediately after deserialization;
    // it returns the existing singleton instance
    protected Object readResolve() {
        return getInstance();
    }
}
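
A minimal round-trip sketch (my own illustration, assuming the class-based Logger above) shows what readResolve() buys us; without it, deserialization would quietly manufacture a second instance:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        Logger instance1 = Logger.getInstance();

        // Serialize the singleton to an in-memory byte array
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(instance1);
        }

        // Deserialize it; readResolve() swaps in the original instance
        Logger instance2;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            instance2 = (Logger) in.readObject();
        }

        System.out.println(instance1 == instance2); // true with readResolve()
    }
}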

The reason I chose this resource is that we are currently learning about design patterns and just reviewed the singleton pattern. This post goes into the best implementations of the singleton pattern and explains why they are best, and I would like to stay on top of the most effective, clean, and efficient implementations of design patterns. I think this is a useful post, and I learned about the Bill Pugh solution and the enum solution for ensuring there’s only one instance of a singleton object. The author concluded that the enum approach is the best solution, as it is

“functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiations, even in the face of sophisticated serialization or reflection attacks.”

I expect to take what I learned in this article and use it whenever implementing the singleton design pattern, and I might rework the code I have now to make use of the Enum approach talked about in the article.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Integration Testing

When I read the blog entry about unit testing last week, the author mentioned integration testing as a tool to detect regressions. Since the topic of that blog entry was unit testing, I did not have a chance to research this type of testing, so I decided to choose integration testing as the topic for this week’s entry. Because I did not know much about this type of testing, I found a blog entry with a detailed introduction to integration testing. The blog was written by Shilpa C. Roy, a member of the STH team, who has been working in the software testing field for the past nine years. Below is the URL of the blog entry:

http://www.softwaretestinghelp.com/what-is-integration-testing/

In this blog, Shilpa introduces the definition of integration testing, its approaches, and its purpose, along with the steps to create an integration test. In Shilpa’s opinion, integration testing is a “level” of testing rather than a “type” of testing. She also believes that the concept of integration testing can be applied not only with the White Box technique but also with the Black Box technique. Besides the two classic approaches, Bottom Up and Top Down, she also mentions another approach called “Sandwich Testing,” which combines the features of both. Moreover, Shilpa gives an example of how integration testing can be applied with the Black Box technique.

I thought that the “third” approach, Sandwich Testing, was interesting. I had always thought I could only test either top down or bottom up, but this really changed my way of thinking. By starting at the middle layer and moving simultaneously upward and downward, the job is divided into two smaller parts, which is more efficient. Unfortunately, this technique is complex and requires more people with specific skill sets, so I really doubt whether I could use it in the future. But the general idea that I could start the process at the middle layer will be useful.

According to Shilpa, validating XML files can be considered part of integration testing for a product when its architecture is all we know. Since the users’ inputs are converted into an XML format and then transferred from one module to another, validating the XML files tests the behavior of the product. In my opinion, this example made it easy to understand how integration testing can be applied with the Black Box technique. I had always thought it was only available for the White Box technique, but this example proved me wrong. Now that I know about it, I can try to apply it the next time I encounter a similar scenario.
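
As a sketch of that XML-validation idea (my own example built on the standard javax.xml.validation API; the file names are hypothetical, and this is not code from Shilpa’s post), an integration test could check that the XML handed from one module to the next conforms to an agreed schema:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XmlContractCheck {
    public static void main(String[] args) throws Exception {
        // The schema is the "contract" the two modules agree on
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("order-contract.xsd"));

        // Validate an intermediate file produced by the upstream module;
        // validate() throws SAXException if the XML breaks the contract
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("order-from-ui.xml")));
        System.out.println("XML matches the contract between modules");
    }
}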

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Tips and Tricks

Source: http://www.codingdojo.com/blog/7-tips-learn-programming-faster/

Today I read a post on a blog called Coding Dojo, written by Stephen Sinco and titled “7 Critical Tips to Learn Programming Faster.” In it, he covers things that help new programmers learn programming faster. One tip that caught my eye is to learn code by always writing code: when learning a new chapter, you should do little personal projects with the new methods or ways of writing code as soon as possible, because that way you will learn the new material more efficiently. He also talks about writing code by hand. This was very surprising to me, because I always assumed we would have a PC to work on. To many people these may seem like basic tips and tricks, but it is very important to remember them.

To me this blog was very helpful, especially the advice about writing code by hand. I think I will start doing this every once in a while to learn code more efficiently. These tips are something we should all revisit while coding, because there are times when we are debugging, get frustrated, and lose focus, and at those moments it is good to take a quick break and relax.

From the blog CS@Worcester – The Road of CS by Henry_Tang_blog and used with permission of the author. All other rights reserved by the author.

Post #5

Today I will be summarizing and providing commentary on a blog post from The Developer’s Piece called “8 design patterns that every developer should know.” I was immediately attracted to the title, but I was even more pleased when I actually read the article: it not only provides clear and concise descriptions of each pattern, it also provides examples of each pattern in practice in the form of code chunks. Beyond these reasons, I primarily chose this article as the subject of this week’s blog post because it lists patterns that we have covered and will cover in class. Like I said in my last post, nothing is better than having a thorough understanding of why you are covering a piece of material in school.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Limitations of Inheritance and how to solve them using composition

In Java, you can use one type as a template for making another type; the new type inherits all the same behavior as the original type. This is known as inheritance. Java class inheritance lets you reuse existing functionality and modify behaviors as you see fit. Inheritance, in other words, is designing your classes by what they are rather than what they do. This is also termed an “is-a relationship,” which can sometimes be misleading. For example, an SUV or a sedan can inherit from a base Car/Vehicle superclass based on the “is-a” relationship. At the same time, much of a car could be modeled with a “has-a” relationship, because a car has tires, brakes, an engine, and so on, so determining which one to implement becomes a little confusing. This seems very bad to me because the subclasses are built on the super or base classes, and when something accidentally changes in the base classes, it automatically affects the subclasses.

Composition, on the other hand, is a design technique used to implement a has-a relationship between classes, unlike inheritance, which expresses an “is-a” relationship. Composition is basically designing classes based on what they do. According to Wikipedia, composition is a principle by which classes achieve polymorphic behavior and code reuse by containing instances of other classes that implement the desired functionality, rather than inheriting from a base or parent class.

Again, in composition, we can control the visibility of other objects to the client classes and reuse only what we need.

In general, composition is a has-a relationship between objects: it is implemented using instance variables, code can be reused, it hides implementation details from client classes, and, most importantly, it is flexible to use.
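
A minimal sketch of the idea (my own example; the class and method names are invented): instead of a Car inheriting behavior from a Vehicle base class, it holds an Engine instance and delegates to it:

// Behavior expressed as an interface rather than a base class
interface Engine {
    void start();
}

class GasEngine implements Engine {
    public void start() {
        System.out.println("Gas engine starting");
    }
}

// Composition: a Car HAS-A Engine and reuses its behavior by delegating
class Car {
    private final Engine engine;

    Car(Engine engine) {
        this.engine = engine;
    }

    void start() {
        engine.start();
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        Car car = new Car(new GasEngine());
        car.start(); // prints "Gas engine starting"
    }
}

Swapping in a different Engine implementation later would require no change to Car at all, which is exactly the flexibility the has-a design buys.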

Now, what did I learn from this study? I have come to learn that implementing composition over inheritance basically starts with creating various interfaces representing the behaviors the code will perform.

I have also learned that inheritance not only encourages you to predict the future, which may or may not work, but also encourages you to build out the branches of the object hierarchy very early in the project, and you are likely to make a very big design mistake while doing that, because we cannot predict the future even though we feel like we can. To me, inheritance is a very bad way of designing classes, as subclasses may inherit functionality from the superclasses that will never be used but cannot be removed. For these reasons, I do not see myself designing a project using inheritance unless it is an educational project that explicitly requires the use of inheritance. I will always go for composition when designing a project with multiple functionalities.

Citations:

https://en.wikipedia.org/wiki/Composition_over_inheritance

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

Code Coverage Alone Probably Won’t Ensure Your Code is Fully Tested.

For this week’s CS-443 self-directed professional development blog entry I read a blog post written by Mark Seemann, a professional programmer and software architect from Copenhagen, Denmark. The blog post I read is entitled “Code coverage is a useless target measure,” and I found it quite relevant to the material we’ve been discussing in class the past couple of weeks, especially regarding path testing and data-flow testing. In this blog post, Seemann urges project managers and test developers not to just set a “code coverage goal” as a means of measuring whether their code is completely tested. Seemann explains that he finds this to be a sort of “perverse incentive,” as it could encourage developers to write bad unit tests for the simple purpose of covering the code as the project’s deadline approaches and the pressure on them increases. He provides some examples of what this scenario might look like using the C# programming language.

In his examples, Seemann shows that it is pretty easy to achieve 100% code coverage for a specific class; however, that doesn’t mean the code in that class is sufficiently tested for correct functionality. In his first example test, Seemann shows that it is possible to write an essentially useless test, existing solely for the purpose of covering code, by using a try/catch block and no assertions. Next, he gives an example of a test with an assertion that might seem legitimate, but Seemann shows that “[the test] doesn’t prevent regressions, or [prove] that the System Under Test works as intended.” Finally, Seemann gives a test in which he uses multiple boundary values and explains that even though it is a much better test, it hasn’t increased code coverage over the previous two tests. Hence, Seemann concludes that in order to show that software is working as it is supposed to, you need to do more than just make sure all the code is covered by unit tests; you need to write multiple tests for certain portions of code and ensure correct outputs are generated when given boundary-value inputs as well.
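
To illustrate the contrast (a Java analogue I wrote myself; Seemann’s original examples are in C#), both tests below execute, and therefore “cover,” the same method, but only the second can ever fail and catch a regression:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CoverageVsCorrectnessTest {
    // Hypothetical method under test
    static int add(int a, int b) {
        return a + b;
    }

    // Covers the code but asserts nothing: it can never fail,
    // so it proves nothing about correctness
    @Test
    public void uselessCoverageOnlyTest() {
        try {
            add(2, 2);
        } catch (Exception e) {
            // exception swallowed; even a crash would go unreported
        }
    }

    // The same coverage, but with assertions on boundary values
    @Test
    public void meaningfulBoundaryTest() {
        assertEquals(4, add(2, 2));
        assertEquals(0, add(Integer.MAX_VALUE, Integer.MIN_VALUE + 1));
    }
}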

The reason I chose Mark’s blog post for my entry this week was that I thought it related to the material we’ve been discussing recently in class, especially data-flow testing. I think it is important for us to remember, when we are using code-based testing techniques, that writing unit tests simply to cover the code is not sufficient to ensure software is completely functional. Therefore, it’s probably a good idea to use a combination of code- and specification-based techniques when writing unit tests.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

PayPal’s Interesting Design Pattern API

PayPal is a massive American online banking and currency handler that allows users to transfer funds electronically. PayPal originally started out as a company that developed security software for handheld devices. This article describes the different design guidelines developed through the years of building the API. All of PayPal’s platform services are connected through RESTful APIs. REST is an architectural style for designing networked applications in which APIs use HTTP requests to GET, PUT, POST, and DELETE data. We have learned that, through design patterns, we can optimize and organize our code in efficient ways to make later implementations of other objects easier. I chose this article because it is another look at the practical implementation of design principles for large businesses.

From the article, we can see that the basic principles of PayPal’s design foundation are very similar to what we are currently implementing in our code. Some of the principles discussed in the article are loose coupling, encapsulation, stability, reusability, contract-based design, consistency, ease of use, and externalizability. Since these APIs are developed with consumer business in mind, each principle is catered to that goal. For loose coupling, services need to be loosely coupled from each other; this makes it so that components in a network can work together but do not heavily rely on each other. Encapsulation is also important in order to group certain attributes together, so that we can restrict direct access to certain parts of an object’s components. The stability principle holds that the service must remain stable. The code should also be reusable so that it can be used across multiple different instances and by different consumers and users. This is important for team collaboration and organization as well, because if multiple people can understand where an issue is occurring, it can be solved a lot faster. For contract-based functionality, services need to be shared using a standardized service contract; standardization loops back to the reusability principle and makes accessibility more seamless. Ease of use and consistency work hand in hand, because a service that follows a specific set of rules and attributes also needs to be easy for consumers to work with. The last design principle is externalizability, which requires that the service can save and restore the contents of its instances.
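
A tiny sketch of the loose-coupling principle in code (my own illustration, not from the article): the client depends only on an interface, so the concrete service behind it can change without touching the client:

// The contract the client depends on
interface PaymentService {
    void pay(String account, double amount);
}

// One implementation; it can be swapped without changing the client
class CardPaymentService implements PaymentService {
    public void pay(String account, double amount) {
        System.out.println("Charging " + amount + " to " + account);
    }
}

// The client is coupled only to the interface, not the implementation
class CheckoutClient {
    private final PaymentService payments;

    CheckoutClient(PaymentService payments) {
        this.payments = payments;
    }

    void checkout() {
        payments.pay("demo-account", 9.99);
    }
}

public class LooseCouplingDemo {
    public static void main(String[] args) {
        new CheckoutClient(new CardPaymentService()).checkout();
    }
}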

I found these design principles interesting because they seem to be a great tool to use when designing a program, offering fundamental guidelines for optimization. I expect to apply these principles in my future practice by remembering the importance of organizing code so that it is accessible to both the consumer and the programmer.

Source: https://www.infoq.com/news/2017/09/paypal-api-guide

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Acronyms to Remember

Source: https://blog.codinghorror.com/kiss-and-yagni/

This week’s reading is on KISS and YAGNI by Jeff Atwood on Coding Horror. Jeff touches upon and shows enthusiasm for the effective practices of KISS (Keep It Simple, Stupid!) and YAGNI (You Aren’t Gonna Need It). He mentions that just because you learned something complex that could be applied to your current project, it doesn’t mean you should implement it when a simpler option is available; that is the idea of KISS. Lastly, he mentions that developers should follow YAGNI to combat the mindset of implementing solutions they don’t currently need.

These two topics, KISS and YAGNI, are very interesting because the way people are usually taught to think, when applied to programming, often makes things worse rather than better. Reading this blog lets me see how more experienced developers feel about those who do not understand or value the two practices. Also, this is related to smelly code, as it will help fight against dead code! I can feel that it is indeed a wise choice to pick up KISS and YAGNI early on. Although I have yet to write many programs that needed complex solutions, I do remember situations where my implementations were much longer than other students’, which could be a result of a weaker understanding of certain fundamental topics or a lack of experience. For example, another student’s implementation of a loop was much shorter but did the same thing as my longer loop. It’s not exactly KISS, but it does show that sometimes things can be done in a simpler way, as in the toy comparison below.
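
A toy version of that loop comparison (my own example, not from the post): both loops sum the same array, but the second expresses the intent far more simply:

public class KissDemo {
    public static void main(String[] args) {
        int[] values = {3, 1, 4, 1, 5};

        // The long way: manual index bookkeeping plus a redundant check
        int total1 = 0;
        int i = 0;
        while (i < values.length) {
            if (i <= values.length - 1) { // always true inside this loop
                total1 = total1 + values[i];
            }
            i = i + 1;
        }

        // The simple way: a for-each loop that reads as intent
        int total2 = 0;
        for (int v : values) {
            total2 += v;
        }

        System.out.println(total1 + " == " + total2); // 14 == 14
    }
}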

Overall, understanding either practice will help save time when it comes to getting things done. By following YAGNI, I will not implement things I don’t need until they are needed; polluting the code with unused implementations will most likely create complications for later functions, and if a feature is never needed, the time spent building it for the future is wasted rather than saved. Then we have KISS, which is rather straightforward, but it should be a clear reminder to keep things simple when you can, not that everything can always be kept simple. These two concepts will help me keep code clean and maintainable, to the point where implementing newer features will not be as frustrating as working with smelly code.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.