Category Archives: Week 12

Journey into the Top 12 Tools, Frameworks, and Libraries for Software Development in 2018

Another blog means another step forward in my journey in Software C.D.A. This blog is a review of the blog I read, “Top 12 Tools, Frameworks, and Libraries for Software Development in 2018” by Archna Oberoi.

This blog covers the top 12 software tools, frameworks, and libraries in use now in 2018. Personally, I found the blog extremely well written and insightful, and it is clear the writer knows what she is talking about. I suggest readers interested in this topic read the blog. I will provide a very brief summary of its main points.

In the blog “Top 12 Tools, Frameworks, and Libraries for Software Development in 2018,” Archna Oberoi explains how the following 12 tools, frameworks, and libraries are considered among the best for software development in 2018. They are the following:

  1. NodeJS – “the javascript runtime framework built on Chrome V8 engine” is best for applications that require data input and output to be available to users in real time.
  2. Angularjs – “Introduced by Google in 2012, this javascript framework for front-end development is great for building Single Page Applications (SPAs).”
  3. React – “a javascript library by Facebook for building user interfaces (for web).”
  4. .NET Core – “an open-source, next-gen .NET framework by Microsoft.”
  5. Spring – “a dependency injection framework (Inversion of Control) that assigns dependencies to the object during run-time.”
  6. Django – “an open-source framework for web app development,” written in Python.
  7. TensorFlow – “a machine learning framework by Google, meant for creating Deep Learning models.”
  8. Xamarin – “Xamarin offers an edge over the proprietary and hybrid development models as it allows developing full-fledged mobile apps using single language, i.e. C#. Moreover, Xamarin offers a class library and runtime environment, which is similar to rest of the development platforms (iPhone, Android, and Windows).”
  9. Spark – “an open-source, micro framework, meant for creating web applications in Kotlin and Java.”
  10. Cordova – “(formerly PhoneGap) is a hybrid app development framework that uses HTML, CSS, and Javascript for building mobile apps.”
  11. Hadoop – “an open-source framework by Apache that stores and distributes large data sets across several servers, operating in parallel.”
  12. Torch/PyTorch – “a machine learning library for Python.”
All quotes are from the blog “Top 12 Tools, Frameworks, and Libraries for Software Development in 2018” by Archna Oberoi.

 

From this blog, I have to say I found both the TensorFlow and Xamarin frameworks so interesting that I am going to do my own research on them. It was also my first time hearing about the TensorFlow framework. That being said, software development has many good tools, frameworks, and libraries worth learning about, especially if you aspire to be a great software developer.

Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Design Pattern Limit

The post Are Patterns like Mummies? by Michael Stal discusses the patterns from the book Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, otherwise known as the Gang of Four. This is the book containing the design patterns that we have been studying in class. He discusses how groundbreaking the book was for the software engineering community when it first came out. Michael also notes that nothing published after the Gang of Four book has had quite as much impact. He questions whether more patterns exist and have simply not been documented, or whether the book contains most, if not all, of the worthwhile software design patterns.

The Gang of Four design patterns are essentially ways to organize and structure your code to solve certain issues you would otherwise run into, with advantages that can lend themselves to whatever you are doing. The authors break the design patterns down into three categories:

  • Creational
  • Structural
  • Behavioral

Creational patterns deal with the creation of objects, structural patterns are executed through inheritance and interfaces, and behavioral patterns concern themselves with the communication between objects. There are several design patterns within each of these categories.
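To make the categories concrete, here is a minimal Java sketch of one creational pattern, a simple factory; the Shape hierarchy here is a hypothetical example of my own, not one taken from the book:

```java
// Creational example: a simple factory (hypothetical Shape hierarchy).
interface Shape { double area(); }

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}

// The factory centralizes object creation, so client code never
// calls `new` on a concrete class directly.
class ShapeFactory {
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown shape: " + kind);
        }
    }
}
```

A structural pattern would instead focus on how classes are composed (for example, an adapter wrapping an incompatible interface), and a behavioral one on how objects communicate.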

It is interesting to think about whether the patterns covered within this book are the only (or at least the strongest) design patterns within software design. If so, every large-scale program is ultimately composed of many components using only the design patterns contained within this book. Is there even a need to define more software design patterns, or can any given program or implementation issue be solved with the patterns found within the Gang of Four book?

I agree with Michael’s closing thoughts about design patterns: using them correctly and consistently leads to much more functional code and encourages best practices among software engineers. Whether there are more design patterns to uncover or we have reached our limit, the patterns we are aware of are still important for making communication between software engineers easier.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing, Equivalence Class Testing, Decision Table-Based Testing

Greetings reader!

This blog will give a short summary of topics that are essential in the Computer Science field: Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing. I will share my reaction to the content, including what I find useful and interesting. Without any further introduction, let’s begin.

Equivalence Class Testing is a black box method that can be used at all levels of testing (unit, integration, system). In this technique, the tester divides the set of test conditions into partitions. Equivalence class testing is used to reduce a large number of test cases into clusters that are much easier to manage. It also gives clear guidance for determining test cases without compromising the effectiveness of testing.
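A minimal Java sketch of the idea, assuming a hypothetical age classifier whose valid input range is 0 to 120; one representative value is picked from each partition:

```java
// Hypothetical function under test: classifies an age (assumed valid range 0-120).
class AgeClassifier {
    static String classify(int age) {
        if (age < 0 || age > 120) return "invalid";
        if (age < 18) return "minor";
        if (age < 65) return "adult";
        return "senior";
    }
}

// Equivalence class testing: one representative input per partition
// stands in for every other value in that partition.
class EquivalenceClassDemo {
    static boolean runTests() {
        return AgeClassifier.classify(-5).equals("invalid")   // partition: below range
            && AgeClassifier.classify(10).equals("minor")     // partition: 0-17
            && AgeClassifier.classify(30).equals("adult")     // partition: 18-64
            && AgeClassifier.classify(80).equals("senior")    // partition: 65-120
            && AgeClassifier.classify(130).equals("invalid"); // partition: above range
    }
}
```

Five test cases cover behavior that would otherwise require hundreds of individual inputs to exercise.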

Boundary value testing is testing at the extreme ends of the input values. The idea of boundary value testing is to select input values at the minimum, just above the minimum, the nominal value, just below the maximum, and the maximum. Equivalence class testing plays a huge role here, because boundary value testing comes after equivalence class testing. Boundary testing is used when it is almost impossible to test a large pool of test cases individually.
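The five classic boundary values can be generated mechanically. A small Java sketch, again using a hypothetical function that accepts values from 1 to 100:

```java
// Hypothetical function under test: accepts values in the range 1..100.
class RangeChecker {
    static boolean accepts(int n) { return n >= 1 && n <= 100; }
}

class BoundaryValueDemo {
    // The five classic boundary-value inputs: min, min+1, nominal, max-1, max.
    static int[] boundaryValues(int min, int max) {
        return new int[] { min, min + 1, (min + max) / 2, max - 1, max };
    }

    static boolean runTests() {
        // all five boundary values lie inside the range and should be accepted
        for (int v : boundaryValues(1, 100)) {
            if (!RangeChecker.accepts(v)) return false;
        }
        // values just outside each boundary should be rejected
        return !RangeChecker.accepts(0) && !RangeChecker.accepts(101);
    }
}
```

Off-by-one errors cluster at the edges of a range, which is exactly where these inputs concentrate the testing effort.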

Decision table testing is a technique used to test system behavior for many different input combinations. It is a strategic approach where the different input combinations and their outputs are captured in a table format. Decision table testing is also called cause-and-effect testing. This testing method is important when it is necessary to test different combinations. One advantage of decision table testing: when the system behavior differs for different inputs rather than being the same across a range of inputs, neither equivalence class testing nor boundary value testing helps, but a decision table can be used.

Decision tables are so simple that they can be easily interpreted and used for development and business alike. The table helps build effective combinations and ensures better coverage for testing. When a tester is going for 100% coverage and the number of input combinations is low, this technique can typically ensure that coverage.
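As a rough Java sketch, a login decision table might look like this; the rules and messages are hypothetical, with each rule in the table becoming one test case:

```java
// Hypothetical system under test: login behavior depends on the
// combination of inputs, not on a range of values.
class Login {
    static String result(boolean validUser, boolean validPass) {
        if (validUser && validPass) return "home screen";
        if (validUser) return "wrong password";
        return "unknown user";
    }
}

class DecisionTableDemo {
    // Decision table:
    //   Rule | validUser | validPass | expected
    //   R1   |    T      |    T      | home screen
    //   R2   |    T      |    F      | wrong password
    //   R3   |    F      |    T      | unknown user
    //   R4   |    F      |    F      | unknown user
    static boolean runTests() {
        return Login.result(true, true).equals("home screen")
            && Login.result(true, false).equals("wrong password")
            && Login.result(false, true).equals("unknown user")
            && Login.result(false, false).equals("unknown user");
    }
}
```

With two boolean conditions there are exactly four rules, so covering every row of the table gives 100% coverage of the input combinations.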

 

From the blog CS@Worcester – dekeh4 by dekeh4 and used with permission of the author. All other rights reserved by the author.

The Iterator Design Pattern

Ever since taking Data Structures, where we had to implement different projects and include an iterator object for each of them, I was curious about why exactly they were so pervasive. As it turned out, the Gang of Four had defined the iterator design pattern, and a helpful article on oodesign.com taught me about the motivation/intent of this pattern as well as how to apply it.

The Gang of Four classify the iterator design pattern as a behavioral pattern, as it contains functions that handle accessing the objects in a collection. The motivation behind this pattern is the ability to control and navigate through a variety of different data structures: arrays, lists, trees, stacks, queues, etc. Also, if the types of the objects within these different kinds of collections are the same, an iterator can be used to handle accessing these objects in the same way. Finally, according to the Gang of Four, “a collection should provide a way to access its elements without exposing its internal structure”. This is important for security as well as providing a single access point for users.

When implementing an iterator, there should be an interface containing useful functions such as hasNext(), next(), remove(), and any other methods that might be necessary. The iterator object itself is implemented as a nested class inside a collection class, so the iterator has access to the local variables and functions of the collection. Structuring it this way allows different types of iterator objects, which handle different collection types, to implement the same interface, so users can process elements of different collections in your program using the same functions.
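A minimal Java sketch of that structure, using the standard java.util.Iterator interface and a hypothetical IntBag collection whose iterator is a nested (anonymous inner) class:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of a collection that exposes its elements only through an iterator.
class IntBag implements Iterable<Integer> {
    private final int[] items;
    IntBag(int... items) { this.items = items; }

    // Nested class: the iterator can read the enclosing collection's fields
    // without exposing the underlying array to callers.
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int index = 0;
            public boolean hasNext() { return index < items.length; }
            public Integer next() {
                if (!hasNext()) throw new NoSuchElementException();
                return items[index++];
            }
        };
    }
}
```

Because IntBag implements Iterable, clients can traverse it with an ordinary for-each loop without ever seeing the underlying array, which is exactly the encapsulation the Gang of Four quote describes.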

Reading this article definitely cleared up a lot of misunderstandings I had about the iterator, and definitely made me appreciate its usefulness in the context of cohesively tying together different types of data structures. This article in particular was helpful because it contained examples as well as UML diagrams, and further reading and applications that were slightly beyond my scope at this point. Still, I highly recommend it for anyone who is curious about what the iterator design pattern does.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Commonly Used Software Testing Strategies

In this article, The 7 Common Types of Software Testing, founder of the Simple Programmer blog John Sonmez uses his experience to describe common strategies of software testing, explain some of their benefits and liabilities, and show how they are applied in the world of software development. Sonmez stresses that there are many other methods, and that none of these is used in isolation.

The first software testing method Sonmez describes is black box testing, where we are only concerned with the output of the program, and no actual code is given to the tester. The benefits of this method are that test cases are simplified to input/output and that tests are written from a user perspective. The downsides are that the underlying reasons behind errors cannot be determined, and cases can be hard to design.

Naturally, the next method the author discusses is white box testing. The pros of having the source code to test are discovering hidden bugs, optimizing code, and faster problem solving. The cons are the tester must have knowledge of programming and have access to the code, and it only works on existing code, so missing functionality may be overlooked.

Sonmez next describes specification-based or acceptance testing. This methodology is one where preset specifications guide the development process. The main benefit of this strategy is that errors are discovered and fixed early in development. The main drawback is that the effectiveness of this method relies on well-defined and complete specifications, which are time consuming to produce, to say the least.

The next strategies the author describes are automated testing and regression testing. This style of testing is designed to make sure changes in software do not cause problems, and is used where manual tests are slow and costly. Sonmez explains how vital these testing strategies are in Agile frameworks, where software is constantly added to. Automated regression tests are integral to this methodology because the same test cases have to be applied frequently, so automation is more advantageous than manual testing.

While there are more strategies Sonmez describes in this article, and many more in general, I chose to focus on the ones we discussed in class, as the author provides good context and a broad overview of these commonly used testing methodologies that further cement my understanding of these concepts and when to use them.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Figuring out Continuous Integration

So for this week, I have decided to read “Introduction to Continuous Integration Testing” from the TestLodge blog. The reason I chose this post is that it is crucial to know the concepts of Continuous Integration and how this practice provides flexibility to development teams. It helps in understanding the workflow and why it enables developers to produce cohesive software even in a short amount of time.

This blog post goes over what Continuous Integration is, its advantages, and concepts to understand before embarking on Continuous Integration. Continuous Integration is an approach within software development in which the developer pushes code into a repository, such as Git, several times daily during the development phase. There are tools available that developers can use to set up Continuous Integration. They can be either licensed or open source, and by using them, simultaneous builds can be run on multiple platforms. Once initiated, these tools can build solutions, run unit as well as functional tests, and automatically deploy the application to a server. There are benefits such as ensuring full integrity across code and build before deployment, and simple setup and configuration. These tools consist mainly of a build server and build agents. To name a few parts of the automatic build process, there are the Build Server, Build Configuration, Build Source path, and Build Step. Depending on the requirements and the size of the budget available to the development team, the choice between open source and licensed tools can go either way.

What I find intriguing about this blog is that it goes out of its way to explain the automatic build parts. Usually, when it comes to Continuous Integration, it can be difficult to grasp the concepts before setting up automatic testing. I understand that automation plays a critical part in the process, which is why having the concepts down is appreciated when explaining it to others. The content of this blog has changed the way I think about this practice.

Based on the content of this blog, I would say it is a great read for understanding the general ideas of Continuous Integration. I do not disagree with its content, since it gives an understanding of the goals of continuous testing. For future practice, I shall try to perform load tests for projects that require fast response times. That way, finding problematic code or bugs can be much faster.

 

Link to the blog: https://blog.testlodge.com/continuous-integration-testing/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.

The Abstract Factory Design Pattern

This post on DZone talks about the abstract factory design pattern and gives an example implementation in Java using geometric shapes. This pattern is similar to the simple factory with the idea of constructing objects in factories instead of just doing so in a client class. It differs in that this abstract version allows you to have an abstract factory base that allows multiple implementations for more specific versions of the same original type of object. It also differs in that you actually create an instance of a factory object instead of just creating different objects within the factory class as in the simple factory.

I like the concept of this pattern more than just having a simple class that creates multiple instances of different objects, as the simple factory does. I also like how the design allows you to have multiple types of objects that can split off into more specific types, such as how the example Java implementation has 2D and 3D shape types and a factory for each kind. The design appears to be efficient, especially in the implementation example, creating a factory for a type of object only when it matches a specific type in the client call. As with the other factory pattern, you can also easily apply other design patterns to the object itself, such as a strategy or singleton, which would further improve the final outcome. Another aspect of this pattern that I like is that the client itself does not create the objects; it just calls the factory get method from a provider class that sits between the factory and the client.
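Here is a rough Java sketch of the shape example described above; the class and method names are my own for illustration, not necessarily those used in the DZone article:

```java
// Hypothetical products: one 2D family, one 3D family.
interface Shape { String draw(); }
class Rectangle implements Shape { public String draw() { return "2D rectangle"; } }
class Sphere implements Shape { public String draw() { return "3D sphere"; } }

// Abstract factory: each concrete factory builds one family of shapes.
interface ShapeFactory { Shape create(String kind); }

class Shape2DFactory implements ShapeFactory {
    public Shape create(String kind) {
        if (kind.equalsIgnoreCase("rectangle")) return new Rectangle();
        throw new IllegalArgumentException("unknown 2D shape: " + kind);
    }
}

class Shape3DFactory implements ShapeFactory {
    public Shape create(String kind) {
        if (kind.equalsIgnoreCase("sphere")) return new Sphere();
        throw new IllegalArgumentException("unknown 3D shape: " + kind);
    }
}

// Provider: sits between the client and the factories, so the client
// never instantiates a factory (or a shape) directly.
class FactoryProvider {
    static ShapeFactory getFactory(String dimension) {
        return dimension.equals("3D") ? new Shape3DFactory() : new Shape2DFactory();
    }
}
```

The client only ever talks to FactoryProvider and the ShapeFactory interface, so adding a new shape family means adding a factory, not changing client code.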

I definitely like this pattern and will certainly consider using it the next time I have to create a program with many different variations of the same objects such as shapes or ducks as seen in previous programming examples. It will be especially useful to use this design if I am trying to type check the objects from user input to make sure they are trying to create a valid type of object with the factory. Overall, I am finding that as I read more articles about design patterns, especially for many objects of the same base, I am gaining a better understanding of how to maximize the program efficiency with one or multiple design patterns.

Source: https://dzone.com/articles/abstract-factory-design-pattern

From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

Figuring Out What They Expected

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post talks about the terms “user model” and “program model”, and about making the program model conform to the user model in a program. The user model is users’ mental understanding of what the program is doing for them. When new users start using a program, they do not come with a completely clean slate: they have some expectations of how the program is going to work. If they’ve used similar software before, they will think it’s going to work like that other software. If they’ve used any software before, they are going to think your software conforms to certain common conventions. They may have intelligent guesses about how the UI is going to work. Similarly, the program model, a program’s “mental model,” is encoded in bits and executed faithfully by the CPU. If the program model corresponds to the user model, you have a successful user interface.

For example, in Microsoft Word (and most word processors), when you put a picture in your document, the picture is actually embedded in the same file as the document itself. After inserting the picture in the document, you can delete the original picture file and the picture will remain in the document. On the contrary, HTML doesn’t let you do this; HTML documents must store their pictures in a separate file. If you take a user who is used to word processors and doesn’t know anything about HTML, and sit them down in front of an HTML editor like FrontPage, they will almost certainly think that the picture is going to be stored in the file. This is a user model. Therefore, the program model (the picture must be in a separate file) does not conform to the user model (the picture will be embedded).

If you’re designing a program like FrontPage, you have to create something to bring the program model in line with the user model. You have two choices: changing the user model or changing the program model. It is remarkably hard to change the user model. You could explain things in the manual, or pop up a little dialog box explaining that the image file won’t be embedded. However, these are inefficient, because they are annoying and users do not read them. So the best choice is almost always to change the program model, not the user model. Perhaps when the user inserts the picture, you could make a copy of the picture in a subdirectory beneath the document file.

You can find the user model in certain circumstances by describing a situation to some users and asking them what they think is happening. Then you figure out what they expect. The popular choice is the best user model, and it’s up to you to make the program model match it. You do not have to test too many users or have a formal usability lab. In some cases, five or six users are enough, because after that you start seeing the same results again and again, and any additional users are just a waste of time. User models aren’t very complex. When people have to guess how a program is going to work, they tend to guess simple things rather than complicated things.

It’s hard enough to make the program model conform to the user model when the models are simple. When the models become complex, it gets even harder. So you should pick the simplest possible model.

Article: https://www.joelonsoftware.com/2000/04/11/figuring-out-what-they-expected/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

Path Testing in Software

Hello! Today’s topic of discussion is path testing in software development. The article up for discussion today is “Path Testing: The Coverage” by Jeff Nyman. So let’s get right into it. What exactly is path testing? Path testing is a method of testing that involves traversing through the code in a linear fashion to ensure that the entire program gets test coverage. The point of path testing is to make graphs that represent your tests. This is done by graphing out your program with nodes, which represent different lines of code, methods, or bodies of code. The connections between the nodes represent their linear relationships. If a program graph is made correctly, you should easily be able to identify the flow of the program, and things such as loops should be distinctly represented. So what’s the big deal with testing using these program graphs? When we are testing, we want to make sure we test all of our code and all the relationships between different parts of the code. If we treat nodes as these different parts of our code, we can test using the program graph layout by making sure that every node in our program graph is traversed during testing, as well as every relationship between the nodes. This is an important concept for ensuring that your program gets 100% test coverage, and for making sure that the different parts of the code work together as they properly should.
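A small Java sketch of how a method maps onto a program graph; the node numbering here is hypothetical and marked in the comments:

```java
// A small method annotated with hypothetical program-graph nodes.
class PathDemo {
    static int sumPositives(int[] values) {
        int sum = 0;                    // node 1: entry / initialization
        for (int v : values) {          // node 2: loop condition (exit edge goes to node 5)
            if (v > 0) {                // node 3: decision
                sum += v;               // node 4: loop body action
            }                           // edges 3 -> 2 and 4 -> 2 loop back to the condition
        }
        return sum;                     // node 5: exit
    }
}
```

Three test inputs cover the distinct paths: an empty array (nodes 1, 2, 5), an array with only non-positive values (adding nodes 3 and the 3 → 2 edge), and an array with a positive value (adding node 4 and its loop-back edge).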

I have drafted a few different program graphs for programs in my testing class, and I have to say that they make the objective of the code entirely clear. By objective, I mean I can tell exactly what is supposed to happen in the code and the exact order of execution. Loops are entirely clear in these graphs because they are represented by a line going from one node looping back up to a node above it. If there are two nodes between the first node and the node that loops back up to it, then I know that nodes 1-4 form a loop of some sort, where nodes 2 and 3 are some sort of method body that performs an action within the loop, node 1 is the beginning of the loop, and node 4 is the exit condition node. You can test the edges between the nodes to ensure the different relationships between the nodes are correct, as well as test the nodes themselves to ensure they work as desired. I think that program graph testing is a great way to visualize testing, and I hope to use it a lot in the future.

Here’s the link: http://testerstories.com/2014/06/path-testing-the-coverage/

 

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Dynamic Test Process

Source: https://www.guru99.com/dynamic-testing.html

This week’s reading is a dynamic testing tutorial written by Radhika Renamala. Dynamic testing is stated to be a software testing technique where the dynamic behavior of the code is analyzed. An example provided is a simple login page that requires input from the end user for a username and password. When the user enters either a password or a username, there is an expected behavior based on the input. By comparing the actual behavior to the expected behavior, you are working with the system to find errors in the code. The article also provides a dynamic testing process in the order of test case design and implementation, test environment setup, test execution, and bug reporting. The first step is simply identifying the features to be tested and deriving test cases and conditions for them. Then you set up the tests to be executed, execute them, and document the findings. Using this method can reveal hidden bugs that can’t be found by static testing.
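A minimal Java sketch of the login example, comparing actual behavior to expected behavior; the credentials and messages here are made up for illustration:

```java
// Hypothetical login logic to be exercised dynamically, i.e. by executing it.
class LoginPage {
    static String submit(String username, String password) {
        if (username == null || username.isEmpty()) return "Username required";
        if ("alice".equals(username) && "secret123".equals(password)) return "Welcome";
        return "Invalid credentials";
    }
}

// Dynamic testing: run the code with real inputs and compare the
// actual output against the expected behavior for each case.
class DynamicTestDemo {
    static boolean runTests() {
        return LoginPage.submit("", "x").equals("Username required")
            && LoginPage.submit("alice", "secret123").equals("Welcome")
            && LoginPage.submit("alice", "wrong").equals("Invalid credentials");
    }
}
```

Each case pairs an input with its expected behavior; a mismatch between the actual and expected result is exactly the kind of error dynamic testing is meant to surface.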

This reading was interesting because I thought dynamic testing was a simpler process than what is written in this article. Personally, I thought that randomly inputting values and observing the output would be sufficient. However, the simplified steps aren’t as simple as they look, because there are necessary considerations. In the article, the author warns the reader that other factors should be considered before jumping into dynamic testing. Two of the most important are time and resources, as they will make or break the efficiency of running these tests. This is unlike static testing, as I have learned, which is based more around creating tests from the code provided by the user. That allows testers to easily create tests that are clearly related to the code, but it does not allow them to think outside the box, whereas dynamic testing does. This type of testing will certainly create abnormal situations that can generate a bug. As stated in the article, this type of testing is useful for increasing the quality of your product. From this article, I can see why the author concluded that using static and dynamic testing in conjunction with each other is a good way to properly deliver a quality product.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.