Category Archives: Week 5

Software Frameworks

This week I picked software frameworks, since it's going to be one of the next topics discussed in class. I'd rather prepare myself now and get a better understanding ahead of time.

A software framework is a concrete or conceptual platform where common code with generic functionality can be selectively specialized or overridden by developers or users. Frameworks take the form of libraries, where a well-defined application program interface (API) is reusable anywhere within the software under development.

Here are some types of software frameworks:

  • Resource Description Framework, a set of rules from the World Wide Web Consortium for how to describe any Internet resource such as a Web site and its content.
  • Internet Business Framework, a group of programs that form the technological basis for the mySAP product from SAP, the German company that markets an enterprise resource management line of products.
  • Sender Policy Framework, a defined approach and programming for making e-mail more secure.
  • Zachman Framework, a logical structure intended to provide a comprehensive representation of an information technology enterprise that is independent of the tools and methods used in any particular IT business.

Using a framework is not really any different from classic OOP programming.

When you write projects in a similar environment, you will probably see yourself writing a framework (or a set of tools) over and over again.

A framework is really just code reuse – instead of you writing the logic for managing a common task, someone else (or you) has written it already for you to use in your project.

A well-designed framework will keep you focused on your task, rather than spending time solving problems that have already been solved.

Frameworks of all kinds are extremely important nowadays because of the time factor. When building something, you will need to invest a lot of your time in building the logic for your application – and you don't want to be forced to program any kind of low-level functionality. Software frameworks do that: they take care of the low-level stuff for you.
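To make this concrete, here is a minimal sketch of my own (a made-up example, not taken from either reference) of how a framework typically inverts control: the framework owns the generic, low-level flow, and your project only fills in the application-specific piece.

    // Hypothetical framework class: it owns the generic, low-level request flow.
    abstract class RequestHandlerFramework {

        // The framework calls YOUR code; you never re-implement the plumbing.
        public final void handle(String request) {
            log("Received: " + request);       // low-level concern handled for you
            String result = process(request);  // your application logic plugs in here
            log("Responding: " + result);
        }

        // The only part a project using the framework has to write.
        protected abstract String process(String request);

        private void log(String message) {
            System.out.println("[framework] " + message);
        }
    }

    // Your project: just the task-specific logic, nothing low-level.
    class GreetingHandler extends RequestHandlerFramework {
        @Override
        protected String process(String request) {
            return "Hello, " + request + "!";
        }
    }

Calling new GreetingHandler().handle("CS-343") would print the framework's log lines around your own logic, which is exactly the "focus on your task" benefit described above.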

There are some disadvantages, though:

  • Creating a framework is difficult and time-consuming (i.e. expensive).
  • The learning curve for a new framework can be steep.
  • Over time, a framework can become increasingly complex.

But even with these disadvantages, I think frameworks are the best way to go.

From this topic I learned that a framework is a form of code reuse, extremely important for programmers because it takes care of the low-level stuff. This will also help me develop better code at a faster pace. I really hope this helps all the students taking this CS-343 class gain a better understanding in the future.

Links or references: https://www.techopedia.com/definition/14384/software-framework ,

http://whatis.techtarget.com/definition/framework

From the blog CS@worcester – Site Title by Derek Odame and used with permission of the author. All other rights reserved by the author.

Levels of Testing

Link to blog: https://blog.testlodge.com/levels-of-testing/

Before software is released and used, it has to be tested so that there are no flaws within its specification or function. In this blog by Jake Bartlett, he explains the stages or “levels” of testing that are completed prior to the release and use of software. These levels include Unit Testing, Integration Testing, System Testing, and Acceptance Testing.

Unit Testing: The first level of testing is unit testing, which is the most micro-level of testing. It involves testing individual pieces of code to make sure each part or unit is correct. A unit is a specific piece of functionality, a program, or a certain procedure within an application. This type of testing verifies the internal design, internal logic, internal paths, and error handling.
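As a quick illustration (a small example of my own, not from Bartlett's post), a unit test exercises one small piece of functionality in isolation and verifies its internal logic:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical unit under test: one small, isolated piece of functionality.
    class PriceCalculator {
        double applyDiscount(double price, double percent) {
            return price - (price * percent / 100.0);
        }
    }

    // The unit test verifies just this unit's logic, including an edge case.
    public class PriceCalculatorTest {

        @Test
        public void discountIsSubtractedFromPrice() {
            assertEquals(90.0, new PriceCalculator().applyDiscount(100.0, 10.0), 0.001);
        }

        @Test
        public void zeroPercentLeavesPriceUnchanged() {
            assertEquals(50.0, new PriceCalculator().applyDiscount(50.0, 0.0), 0.001);
        }
    }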

Integration Testing: This level of testing comes after unit testing. Integration testing tests how the units work together. Individual units are combined and tested as a group. This overall process ensures that the application runs efficiently by thoroughly dissecting and analyzing how each unit of code performs with the others. The three techniques to effectively conduct integration testing are Big Bang Testing, Top Down Approach, and Bottom Up Approach.

Big Bang Testing involves testing the entire code along with each group of components simultaneously. The downside to this technique is that since it tests the entire code altogether at one time, it makes it hard to identify the main cause of a problem if there is one.

The Top Down Approach tests the top-level units of the code first and moves down to the lower-level units in that sequence.

The Bottom Up Approach tests the bottom-level units first and moves up to the higher-level units in that sequence. Basically, it is the reverse of the Top Down Approach.

System Testing: This type of testing requires the entire application. It is a series of tests that exercise the application end-to-end and verify the technical, functional, and business requirements of the software. This is the last level of testing before the user tests the application.

Acceptance Testing: This is the final level of testing, which determines whether or not the software is ready to be released and used. Acceptance testing should be done by the business user or end user.

I chose this blog on levels of testing because I wanted to know more about each level. I had the basic concepts of certain types of testing that were discussed in my software testing class; however, terms such as system testing and acceptance testing were the ones I wanted to know more about. Bartlett highlighted the important aspects of each of the four levels of testing, which helped me understand them conceptually a lot better. Understanding these levels of testing is important because, as a future video game developer, I will have to run many types of tests to efficiently test the software I produce before releasing it. It is essential that my tests allow my applications to run successfully.

From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

SOLID principles

This week I read a blog on SOLID principles. I believe using SOLID principles in the software design process will guide me in the creation of clean and robust code.

There are many design principles out there, but at the basic level, there are five principles which are abbreviated as the SOLID principles.

S = Single Responsibility Principle

O = Open/Closed Principle

L = Liskov Substitution Principle

I = Interface Segregation Principle

D = Dependency Inversion Principle
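To make the first two of these concrete, here is a small sketch of my own (not from the blog I read): each class below has a single responsibility, and new invoice formats can be added without modifying existing code, which is the open/closed idea.

    // Single Responsibility: this class only holds invoice data.
    class Invoice {
        final double amount;
        Invoice(double amount) { this.amount = amount; }
    }

    // Open/Closed: new output formats are added by writing new implementations,
    // not by editing code that already works.
    interface InvoicePrinter {
        String print(Invoice invoice);
    }

    class PlainTextPrinter implements InvoicePrinter {
        public String print(Invoice invoice) {
            return "Invoice total: " + invoice.amount;
        }
    }

    class JsonPrinter implements InvoicePrinter {
        public String print(Invoice invoice) {
            return "{\"total\": " + invoice.amount + "}";
        }
    }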

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Software Architectural Patterns

Link to blog: https://medium.com/towards-data-science/software-architecture-patterns-98043af8028

In this blog by Anuradha Wickramarachchi, he highlights the different layers of software architecture. These include the Presentation Layer, Business Layer, Persistent Layer, and Database Layer. He also describes that each of these layers contains several "components," such as open and closed layers. Each layer is described as follows:

Presentation Layer: The presentation layer presents and displays web pages, UI forms, and end-user-facing APIs.

Business Layer: The business layer contains the logic behind the accessibility, security, and authentication procedures. These include the Enterprise Service Buses, middleware, and other request interceptors that perform validations.

Persistent Layer: The persistent layer is the presentation layer for data, which includes the Data Access Object presentation (DAO), Object Relational Mappings (ORM), and other modes of data presentation at the application level. All of these types of data presentation reveal persistent data within RAM.

Database Layer: The database layer provides simple databases expanding up to Storage Area Networks (SANs).

Components of these layers include open and closed layers. According to Wickramarachchi, open layers allow the system to bypass layers and hit a layer below. This is done in critical systems where latency can cost a lot; at times, it is reasonable to bypass layers and directly seek data from the right layer. Closed layers, in contrast, embody the concept of Layers of Isolation, which separates each layer in a strict manner. This allows only a sequential pass through the layers, without any bypassing. Layers of Isolation enforce better decoupling of the layers, which makes the system more adaptable to change.
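A minimal closed-layer sketch (hypothetical class names of my own, not from Wickramarachchi's post) where each layer only talks to the layer directly below it:

    // Database layer (stand-in for a real database or SAN).
    class UserDatabase {
        String findName(int id) { return "user-" + id; }
    }

    // Persistent layer: the only layer allowed to touch the database layer.
    class UserDao {
        private final UserDatabase db = new UserDatabase();
        String loadUserName(int id) { return db.findName(id); }
    }

    // Business layer: validation/authorization logic, talks only to persistence.
    class UserService {
        private final UserDao dao = new UserDao();
        String getUserName(int id) {
            if (id < 0) throw new IllegalArgumentException("invalid id");
            return dao.loadUserName(id);
        }
    }

    // Presentation layer: what the end user sees, talks only to the business layer.
    class UserPage {
        private final UserService service = new UserService();
        String render(int id) { return "<h1>" + service.getUserName(id) + "</h1>"; }
    }

An open layer would let UserPage call UserDao directly to save a hop; Layers of Isolation forbids exactly that shortcut.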

I chose this blog because I wanted to know more about software architectures and their layers. I knew briefly that software architectures contain multiple layers that perform a number of tasks and jobs, and that each layer differs from the others. One new thing that I learned from reading this blog was Layers of Isolation; it was my first time seeing that terminology. I thought it was interesting that the four layers of software architecture would contain other "components," which Wickramarachchi explains, as well as open and closed layers.

I felt that Wickramarachchi explained things well and got straight to the concepts I wanted to understand. He highlighted the main aspects of each layer without going overboard on extra content, which helped me understand the concepts further. Since I didn't previously have a solid understanding of software architectures, this blog clarified the fundamentals I wanted to understand.

From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

10/16/2017 – Blog Assignment Week 5 CS 343

https://airbrake.io/blog/design-patterns/factory
The post this week hinges on the factory method pattern. Similar to the simple factory, the factory method revolves around the concept of a factory. An important difference is that factory methods provide a simpler way to further abstract the underlying class. As further explained in the article, like factories, the code should make use of an intermediary factory class, which provides an easier way to rapidly produce objects for the client. The main benefit, as explained, is that the factory should "take care of the work for us," meaning that we do not have to care about what happens behind the scenes; we can just use the code.

In order to explore a real-world example of implementing a factory method, the article explores the relationship between authors and publishers. As we all know, there are many different types of authors, for example those that specialize in fiction or nonfiction. Similarly, different publishers prefer authors that specialize in certain fields and styles of writing. An example of a publisher is a newspaper, a type of publisher that focuses on publishing nonfiction authors.

In the example, the publisher acts as a factory method, where it always has some tasks that remain the same. The baseline concept is that it acts as a factory to abstract and separate the different types of publishers from the different types of authors. The main goal is to separate the process of hiring a type of author from the particular type of publisher. The code starts off with a basic IAuthor interface, which contains the Write() method. From there, two unique types of authors are used, FictionAuthor and NonfictionAuthor; both contain the Write() method. The Publisher class contains the core component of the factory method pattern, in that it contains a HireAuthor() method and a Publish() method.
The convenience of the factory method pattern in this case is that the implementations of Blog.HireAuthor() and Newspaper.HireAuthor() can be different. All in all, the client can freely create a blog and a newspaper by issuing the Publish() command without knowing the internal workings of the factory method. The result is automatic instantiation of the appropriate type of IAuthor, meaning the correct type of author is created for the type of publication. The Publisher class furthermore adheres to the open/closed principle, so that it can be easily expanded without affecting other internal code. The main idea here is that any class inherited from Publisher can be referenced without having knowledge of how the authorship or the writing process works behind the scenes.
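Here is a rough Java re-sketch of that structure based on the class and method names above (my own reconstruction, so the article's actual code will differ in details; which author type Blog hires is assumed here just for illustration):

    interface IAuthor {
        void Write();
    }

    class FictionAuthor implements IAuthor {
        public void Write() { System.out.println("Writing fiction."); }
    }

    class NonfictionAuthor implements IAuthor {
        public void Write() { System.out.println("Writing nonfiction."); }
    }

    // The factory method pattern: Publisher defines the flow, and each subclass
    // decides which concrete IAuthor gets instantiated.
    abstract class Publisher {
        protected abstract IAuthor HireAuthor();   // the factory method

        public void Publish() {
            IAuthor author = HireAuthor();         // client never sees the concrete type
            author.Write();
        }
    }

    class Blog extends Publisher {
        protected IAuthor HireAuthor() { return new FictionAuthor(); }      // assumed for illustration
    }

    class Newspaper extends Publisher {
        protected IAuthor HireAuthor() { return new NonfictionAuthor(); }   // nonfiction, as described
    }

The client just calls new Newspaper().Publish() and gets the correct type of author without knowing how it was hired.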
I chose this article because the example outlines the advantages and disadvantages of the factory method pattern. One advantage based on the example is that it encourages consistency in code: whenever an object is created, it is created through the factory instead of through different constructors on different client sides. Another advantage is that it enables subclasses to provide extended versions of an object. Therefore, creating objects inside a factory is more flexible than creating them directly in the client. Finally, the factory design makes it easier to debug and troubleshoot the code, since it centralizes object creation and every client receives the object from the same place. The main disadvantage that I see from the example is that it can complicate the code when it is unnecessary. That is why I chose this topic this week: to analyze the advantages and disadvantages of the factory pattern.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Non-Functional Testing

For this post I chose the article “What is Non Functional Testing?” on Software Testing Help’s website. I chose this article because I like the content on this site that I have read previously and I often forget the difference between functional and non-functional testing. I’m hoping by covering it in a blog it will help commit it to memory and also give me my own quick reference if I need it in the future.

To start, it's important to remember that the two broadest types of testing are functional and non-functional. Non-functional testing in a general sense addresses things like application performance under normal circumstances, the security of an application, disaster recovery of an application, and a lot more. These types of testing are just as important as meeting the requirements of any application; they are what contribute to the quality of an application.

To follow are the most popular non-functional techniques, as a quick reference with a quick explanation of each:

  1. Performance Testing: Overall performance of a system. (meets expected response time; see the sketch after this list)
  2. Load Testing: System performance under normal and expected conditions. (test concurrent users)
  3. Stress Testing: System performance when it's low on resources. (low memory or disk space, max out)
  4. Volume Testing: Behavior with large amounts of data. (max database out and query, check limit of data failure)
  5. Usability Testing: Evaluate system for human use. (ease of use, correct/expected outputs)
  6. User Interface Testing: Evaluate GUI. (consistent for its look, easy to use, page traversals)
  7. Compatibility Testing: Checks if application can be used with other configurations. (different browsers)
  8. Recovery Testing: Evaluates for proper termination and data recovery after failure. (loss of power, invalid pointer)
  9. Installation Testing: Evaluates install/uninstall success. (correct system components, updating existing installation)
  10. Documentation Testing: Evaluates docs and user manuals. (document availability, accuracy)
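As a tiny sketch of item 1 (a made-up operation and a made-up 200 ms budget of my own, not from the article), a performance check can be as simple as measuring elapsed time against an agreed response-time budget:

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class ResponseTimeTest {

        // Hypothetical operation standing in for a real request to the system.
        private void performRequest() throws InterruptedException {
            Thread.sleep(50); // simulate work
        }

        @Test
        public void respondsWithinBudget() throws InterruptedException {
            long start = System.nanoTime();
            performRequest();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // 200 ms is an assumed budget; a real one comes from the requirements.
            assertTrue("Response took " + elapsedMs + " ms", elapsedMs < 200);
        }
    }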

In conclusion, this covers a good portion of the main types of non-functional testing. This will really just serve as a quick reference or lookup to remind me of the different types of testing that are categorized as non-functional. This isn't changing the way I code, but it has reminded me of the importance of non-functional testing. Just meeting the requirements during the development of an application does not ensure you will output something with high quality. I would argue that responsibility for non-functional testing falls more on the developers, who know what needs to be done, than on the client. A client providing requirements for an application likely will not even think of a lot of the testing types mentioned above. I think it's important for developers to openly communicate with clients about non-functional testing so that they can come up with the best testing plan together.

Overall this was another good article on Software Testing Help. It was exactly the detail I needed and nothing more. Looking ahead, I might as well do a similar blog on functional testing to complete my own reference of testing types.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Singleton Pattern Revisited

For this post I chose the article “Singleton Design Pattern” written by the team at Source Making. I chose this article for two reasons: 1) Source Making is a good resource that covers topics such as design patterns, antipatterns, UML, and refactoring. 2) While we covered the Singleton design pattern in class, I felt like I needed to take a look at it again from another source.

To start, the article touches on the intent of the Singleton pattern. First, it ensures that there is only one instance of a class, with a global point of access. Second, it uses encapsulation in the sense of initializing on first use.

To use the Singleton pattern, you make the class of the single-instance object responsible for creating, initializing, and enforcing its own single instance. The instance itself must be a private and static member. Next, you need a function that encapsulates the initialization and provides access to the instance; this function must be declared public static. When a user needs to reference the single instance, they call this accessor function (getter).

Additionally there are three criteria which must be met:

  1. Ownership of the single instance can’t be reasonably assigned.
  2. Lazy initialization is desirable. (delayed creation)
  3. Global access is not otherwise provided for.
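Putting those pieces together, here is a minimal lazy-initialization sketch in Java (my own, not code from the Source Making article; a production version would also need to consider thread safety):

    public class Configuration {

        // The single instance: private and static, created only on first use.
        private static Configuration instance;

        // Private constructor enforces that no outside code can create instances.
        private Configuration() { }

        // Public static accessor encapsulates the lazy initialization.
        public static Configuration getInstance() {
            if (instance == null) {            // lazy: nothing is created until needed
                instance = new Configuration();
            }
            return instance;
        }
    }

Every caller uses Configuration.getInstance() and is guaranteed to get the same object.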

The author makes some additional remarks about the Singleton pattern. He mentions that this pattern is one of the most widely misused patterns among developers; one of the most common mistakes is attempting to replace global variables with Singletons. One advantage he mentions is that you can be absolutely sure that you have only one instance, though he also points out that most of the time it is unnecessary. He also advises always finding the right balance of exposure and protection for an object, to allow for flexibility. Using a Singleton, however, can lead to not thinking carefully about an object's visibility.

After reading the article I definitely have a better understanding of how the Singleton pattern works and why I would use it. After reviewing the Duck Simulator slides from class and seeing some additional information in this article, I have a good grasp of the concept now. I think the most interesting part of the Singleton pattern is the concept of lazy initialization; I like the idea of no instance being created until it is actually needed. After reading this article I would give it a "C" for content. Had I not been exposed to the Singleton pattern in class, this article would not have been much use to me. But because I already had a basic idea of the pattern, the article helped reinforce the concepts and provide some more examples.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Unit Testing: JUnit

Since for the second part of our "Software Quality Assur & Test" class we are now beginning to test object-oriented software, I thought it would be useful to expand my horizon of knowledge on JUnit. The article I read this week is about unit testing with JUnit 4 and JUnit 5. It explains the creation of JUnit tests and also covers the usage of the Eclipse IDE for developing software tests. This blog will be centered around JUnit 4 and the JUnit topics that I found useful for both my current software testing course and my professional career as well.

Define a test: To define that a certain method is a test method, annotate it with the @Test annotation. This method executes the code under test. Use an assert method provided by JUnit to check the expected result versus the actual result.
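For instance (a small sketch of my own, not taken from the vogella tutorial), a test method is just an annotated method whose name describes the behavior it verifies:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class StringUtilsTest {

        // @Test marks this as a test method; the name says what it verifies.
        @Test
        public void trimRemovesLeadingAndTrailingWhitespace() {
            String expected = "junit";
            String actual = "  junit  ".trim();  // code under test
            assertEquals(expected, actual);      // expected vs. actual result
        }
    }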

Naming conventions: As a general rule, a test name should explain what the test does. If that is done correctly, reading the actual implementation can be avoided.

I will be naming my test methods as logically as I can, so that not only do I know exactly what each test does, but it is also easier for my team members to avoid digging into the actual code, saving time when working in a group. Moreover, I will avoid a test-case naming convention that simply reuses class names and method names as test-case names.

Test execution order: JUnit assumes that all test methods can be executed in an arbitrary order. Well-written test code should not assume any order, i.e., tests should not depend on other tests.

Most of the time while writing my test cases, I used to think about the bigger picture. I would scan through the entire project's code, making sure that I knew the relations and dependencies among the classes. This way, I often ended up writing test cases that had to be executed in a particular order. But now I will remember that tests should be independent and test only one code unit at a time. I will try to make each test independent of all the others.

Defining test methods: JUnit uses annotations to mark methods as test methods and to configure them such as:

@Test, @Before, @After, @BeforeClass, @AfterClass, @Ignore , @Test (expected = Exception.class), @Test(timeout=100).
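A short sketch of how a few of these fit together (my own example; @BeforeClass, @AfterClass and the rest follow the same idea):

    import static org.junit.Assert.assertTrue;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class ListLifecycleTest {

        private List<String> list;

        @Before                      // runs before every @Test method
        public void setUp() {
            list = new ArrayList<>();
        }

        @Test
        public void newListIsEmpty() {
            assertTrue(list.isEmpty());
        }

        @Test(expected = IndexOutOfBoundsException.class)  // passes only if this is thrown
        public void gettingFromEmptyListFails() {
            list.get(0);
        }

        @After                       // runs after every @Test method
        public void tearDown() {
            list = null;
        }
    }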

Assert statements: JUnit provides static methods to test for certain conditions via the Assert class. These assert statements typically start with assert. They allow us to specify the error message, the expected result, and the actual result. An assertion method compares the actual value returned by a test to the expected value, and throws an AssertionError if the comparison fails. The most common methods include:

fail([message]), assertTrue([message,] boolean condition), assertFalse([message,] boolean condition), assertEquals([message,] expected, actual), assertNotEquals([message], expected, actual), assertNull([message], object-reference).

From now on, while writing my asserts I will provide a meaningful message in each assert statement. That will make it easier later on to identify what exactly happened and to fix the problem if an error occurs.

Next week I am looking forward to learning more about testing for exceptions and the use of the assertThat statement.

Source: (http://www.vogella.com/tutorials/JUnit/article.html)

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

10/16/2017 – Blog Assignment Week 5 CS 443

http://reqtest.com/testing-blog/white-box-testing-example/
This week we generalize to white-box testing. White-box testing, or code-based testing, as the name implies works at the code level. This technique does not rely on the specifications but instead provides the programmer with the actual code. Armed with such technical details, programmers can create test cases to test for the success of the system. The key principles of successful testing are the following:

  • Statement coverage – the simplest type of coverage, which ensures that every statement is executed.
  • Branch coverage – ensures that every branch is covered.
  • Path coverage – ensures that all paths are tested.

Statement and branch coverage do not guarantee full path coverage. So, above all, path coverage is favoured for its comprehensiveness.
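A tiny made-up example of why the distinction matters: for the method below (my own, not from the article), one test call covers every statement, two calls cover every branch, but full path coverage needs all four combinations of the two decisions.

    public class CoverageExample {

        // Hypothetical method under test with two independent decisions.
        static String classify(int score, boolean extraCredit) {
            String result = "fail";
            if (extraCredit) {
                score += 5;
            }
            if (score >= 60) {
                result = "pass";
            }
            return result;
        }

        // Statement coverage: classify(70, true) alone executes every statement.
        // Branch coverage:    also needs each 'if' to be false at least once,
        //                     e.g. add classify(40, false).
        // Path coverage:      needs all four decision combinations, e.g.
        //                     classify(70, true), classify(40, true),
        //                     classify(70, false), classify(40, false).
    }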
For code testing, I have always asked whether white-box testing is enough to create a successful, working product when the tester already has the code. What are the advantages of black-box testing when white-box testing should be sufficient?
This article is all about white-box testing. It poses the question of which is better, white-box or black-box testing, but does not show favoritism for either. Both have advantages and disadvantages depending on the scenario, so neither can be ruled out over the other. As stated in the article, black-box testing allows the system to be tested from a user's point of view, while white-box testing allows the system to be tested from a developer's point of view.
Black-box testing allows the tester to have more perspective on the intended customers/users and tests for the expected results. The author seems to favor it during the early stages of product development and the first few sprints of a release, as it allows for further progress and development after eliminating "show stopper" bugs. However, from what I have seen, black-box testing can be redundant and consume too much time. One disadvantage is that test cases are extremely difficult to design when the specifications are unclear and not concise. One advantage is that it can test for boundary conditions.
White-box testing, on the other hand, allows the tester to see the code. Therefore, it helps to bring out bugs that would otherwise be missed with black-box testing. White-box testing, as stated, helps to fix journeys and scenarios that would otherwise have been treated as exceptions but that can be damaging in real life in terms of reputational, regulatory, and monetary damages. It allows for code optimizations by revealing hidden bugs. The emphasis is that it allows engineering teams to conduct thorough testing of the application by allowing all possible paths to be covered. So, in my opinion, white-box testing should be given higher weight.

I chose this article to generalize and out of interest in learning more about white-box testing. Although the article does not show favoritism, I am in favor of most of its techniques over black-box testing.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Abstract Factories

http://www.oodesign.com/factory-pattern.html

http://www.oodesign.com/factory-method-pattern.html

http://www.oodesign.com/abstract-factory-pattern.html

Last week on my blog, I discussed simple and static factories briefly. That post, however, only talked about some of what factories can do. This week, to round off my knowledge, I chose to learn more about abstract factories. The best resource I found to help me with the topic was oodesign.com (Object Oriented Design). Above I've linked all three of their pages on factories, but I'll mostly be concerning myself with the last one, abstract factories. For the most part, I can earnestly say I didn't know much about abstract factories; I'm not the most well-read developer yet. But immediately they seemed like an impressive tool.

From my readings and class lectures this week, I know that the Factory Method pattern uses an interface to create objects while allowing subclasses to decide the type of object. I learned, though, that abstract factories are an extension of this functionality: they are essentially a factory of factories that lets us take advantage of the "code to an interface" principle.
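To make the "factory of factories" idea concrete, here is a rough sketch of my own (not code from oodesign.com) in which each concrete factory produces a whole family of related objects, and the client codes only to the interfaces:

    // One family of related products.
    interface Button   { void render(); }
    interface Checkbox { void render(); }

    // The abstract factory: one creation method per product in the family.
    interface WidgetFactory {
        Button createButton();
        Checkbox createCheckbox();
    }

    // Each concrete factory builds products from a single, consistent family.
    class DarkThemeFactory implements WidgetFactory {
        public Button createButton()     { return () -> System.out.println("dark button"); }
        public Checkbox createCheckbox() { return () -> System.out.println("dark checkbox"); }
    }

    class LightThemeFactory implements WidgetFactory {
        public Button createButton()     { return () -> System.out.println("light button"); }
        public Checkbox createCheckbox() { return () -> System.out.println("light checkbox"); }
    }

    // Client code never names a concrete class and needs no conditionals.
    class Window {
        Window(WidgetFactory factory) {
            factory.createButton().render();
            factory.createCheckbox().render();
        }
    }

Swapping new Window(new DarkThemeFactory()) for new Window(new LightThemeFactory()) changes the whole family at once, and mixing families becomes impossible by construction.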

I can immediately recognize that abstract factories promote good coding practice. In the pages, I saw that every subclass had its own factory class written for it (meaning that the factory method pattern and abstract factories work well together). Subclasses can be added to an interface, so long as they are compatible (in the same family of objects). This would seem to promote easier refactoring and extending of a program's functionality. And, as a developer, I assume that any program I write will need to be expanded or edited. If abstract factories truly do make it easier on that front, then I'd be more than willing to use them. Unfortunately, I do have a concern relating to their extendibility.

They aren't entirely flexible; that is, an abstract factory can't help create an object that isn't in the same family. The added abstraction and encapsulation also seems like it could become too complex or cluttered to be easily read by humans. With every interface having multiple subclasses and factories, any UML diagram or visual would start to get cluttered or cumbersome if a program has multiple interfaces that create families of objects. Making the code more efficient and organized doesn't mean we're making it easier to read. We should remember that it's humans who will edit our work, after all. Though, since it seems like factories in general are prevalent, I'm sure I'll become practiced enough that they aren't daunting. I just don't want to be the guy who makes someone else's day difficult because his code is hard to read.

Using abstract factories also avoids conditional logic, it seems. The usual factory design pattern would use if statements or switches to decide which subclass to return, but, as I said before, each subclass has its own factory; the abstract factory then returns the subclass depending on the input factory class. Conditional statements aren't necessarily difficult to write and understand. The appeal of avoiding them, to me, is avoiding certain errors altogether: I have less to worry about if less of my code can throw an error. Also, if the program were of a larger size, there might be so many conditions that writing a case or if statement for every one would become painstaking. Being able to avoid errors and making code easier to write is an attractive feature.

Just like the other factory patterns, I can see there is a place for abstract factories in the workplace. Now I just want to be sure I know when and how to use them. I'll certainly be practicing with them.

From the blog CS@Worcester – W.I.P. (Something catchy) by aguillardcsblog and used with permission of the author. All other rights reserved by the author.