Category Archives: Week 12

Angular vs. React

When Should You Use React? React shines when you have lots of dynamic content changing within the view. Most client-side solutions on the web today struggle with rendering large lists of items within a single view. This “struggle” may be on the order of milliseconds, but in this increasingly digital world, half a second … Continue reading Angular vs. React

From the blog cs-wsu – Kristi Pina's Blog by kpina23 and used with permission of the author. All other rights reserved by the author.

Software Architecture Patterns: Building Better Software

Hello again, readers! Today I dove into an article by Peter Wayner detailing 5 different design architecture patterns for software design and their benefits and weaknesses.

The first is Layered architecture. Data enters the top layer, and as it passes through each layer, that layer performs a specific task. A major benefit is that each layer is maintainable and testable, can easily be assigned a role, and is easy to update and enhance. However, this can result in the source code being very messy, the code can be slow, the whole program can be hard to understand, and changing a small part could be impossible, as you may need to change the whole program.
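A layered design like this can be sketched in a few lines of Java. Everything here (the layer methods, the "stored:" prefix) is hypothetical, just to show data entering the top layer and each layer performing one task before passing the work down:

```java
// Minimal sketch of a layered architecture: a request enters the top
// layer and each layer performs one task before delegating downward.
// All names here are hypothetical, for illustration only.
import java.util.Locale;

public class LayeredDemo {
    // Presentation layer: validates raw input, then delegates downward.
    static String handleRequest(String rawInput) {
        if (rawInput == null || rawInput.isEmpty()) {
            throw new IllegalArgumentException("empty input");
        }
        return businessLayer(rawInput.trim());
    }

    // Business layer: applies a domain rule, then delegates to persistence.
    static String businessLayer(String input) {
        return persistenceLayer(input.toUpperCase(Locale.ROOT));
    }

    // Persistence layer: in a real system this would write to a database.
    static String persistenceLayer(String record) {
        return "stored:" + record;
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("  order-42 "));  // stored:ORDER-42
    }
}
```

The upside and downside from the article are both visible here: each layer is testable on its own, but changing the data format in one layer can force changes in all of them.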

The second is Event-driven architecture. A central unit is built that accepts all data and then delegates tasks to separate modules, ensuring that your program isn’t just waiting around for something to happen. This allows the program to be scalable, adaptable, and easily extendable. However, it can lead to complexity when testing, and error handling can cause trouble in development. Essentially, the more each module depends on the others, the more troublesome the entire program becomes.
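The central-unit idea can be sketched as a tiny event bus in Java; the event names and handler modules below are invented for illustration:

```java
// Minimal sketch of an event-driven architecture: a central dispatcher
// accepts events and delegates them to registered handler modules, so
// no module sits waiting for work. All names are hypothetical.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventBus {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    // Modules register interest in an event type.
    public void subscribe(String eventType, Consumer<String> handler) {
        handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // The central unit delegates each event to every subscribed module.
    public void publish(String eventType, String payload) {
        handlers.getOrDefault(eventType, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        StringBuilder log = new StringBuilder();
        bus.subscribe("order.placed", p -> log.append("billing:").append(p).append(" "));
        bus.subscribe("order.placed", p -> log.append("shipping:").append(p));
        bus.publish("order.placed", "42");
        System.out.println(log);  // billing:42 shipping:42
    }
}
```

Adding a new module is just another `subscribe` call, which is the extensibility benefit; the testing complexity shows up because each module's behavior depends on which events the others emit.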

The third architecture is Microkernel architecture. This architecture uses a set of core operations that are repeated over and over again in different patterns depending on the data given. If needed, different modules can be tacked on to allow the program to perform different functions and patterns. The difficulty with this architecture is that getting the plug-ins and the microkernel to cooperate can be tricky. There is also the trouble of not being able to modify the microkernel once plug-ins start to depend on it.
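A minimal Java sketch of the microkernel idea, with a small core pipeline and plug-ins tacked on; all names are hypothetical:

```java
// Minimal sketch of a microkernel: a small core runs a fixed pipeline
// and plug-ins extend it with extra operations without changing the
// core. All names are hypothetical, for illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class Microkernel {
    private final List<UnaryOperator<String>> plugins = new ArrayList<>();

    // Plug-ins are tacked on at runtime.
    public void register(UnaryOperator<String> plugin) {
        plugins.add(plugin);
    }

    // Core operation: normalize input, then let each plug-in transform it.
    public String process(String input) {
        String result = input.trim();
        for (UnaryOperator<String> p : plugins) {
            result = p.apply(result);
        }
        return result;
    }

    public static void main(String[] args) {
        Microkernel kernel = new Microkernel();
        kernel.register(s -> s.toUpperCase());
        kernel.register(s -> "[" + s + "]");
        System.out.println(kernel.process(" hello "));  // [HELLO]
    }
}
```

The lock-in problem from the article is visible even here: once plug-ins assume `process` trims its input, that behavior in the core can no longer safely change.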

The fourth architecture is Microservices architecture. The main idea here is to build a number of different tiny programs that each handle one specific task, instead of having one big program do everything. This also allows some individual services to be scaled up to a large size while others are kept small. Some downsides are that some tasks can’t easily be split into a single microservice. Each microservice must also be independent, or the cloud can become unbalanced. Lastly, if tasks are split up amongst several microservices, the communication costs can begin to skyrocket.

The final architecture is Space-based architecture. This architecture is designed to split up processing and storage between multiple servers, which protects against collapse under a heavy load. Data is spread across the nodes and held in RAM. This makes many simple tasks quicker but can also slow down computational tasks. This architecture can also be referred to as Cloud architecture. The main drawback is that with RAM databases, transactional support is difficult. Testing the entire system can be difficult as well.

This was an interesting read, as it went one step higher than what we have learned in class so far. It covered the upper level where multiple programs come together, while in class we cover designing a single program. The Space-based architecture was quite familiar, as I learned about it in a Cloud Computing course I took earlier in the year that dealt with Hadoop. The Microkernel architecture was cool as well, since I personally use Eclipse to work on in-class projects, and learning more about its overall architecture is something I thought I would never dive into. The Event-driven architecture has given me an idea for a way to work on the final project that was recently assigned to us in class. Hopefully it works out, as putting something you learned to use is always a rewarding experience.

Until next time readers. Have a wonderful day and see you next week!

From the blog CS@Worcester – Computer Science Discovery at WSU by mesitecsblog and used with permission of the author. All other rights reserved by the author.

Journey into Mocking in Testing

As I take another step towards Software Quality Assurance testing, I decided to research a topic discussed in class the other day: mocking in testing using Mockito. For this blog I will be talking about a post that relates to my topic of interest. Its name is “What is Mocking in Testing?” by Piraveena Paralogarajah.

This blog was very interesting and does a good job introducing readers to what mocking is and why it is used. I suggest this blog to readers with no experience with or knowledge of the mocking concept or the software framework known as Mockito.

Here is a brief summary of the blog “What is Mocking in Testing?” by Piraveena Paralogarajah.

What is mocking?

Mocking is the notion of making “a replica or imitation of something”.

Where and why is mocking used?

Mocking is used in unit testing. It is used when an object depends on other objects; to isolate the object under test, its dependencies are replaced with mocks that simulate the behavior of the original objects. Mocking is done using a few general types of mocking frameworks. The frameworks mentioned by the blog are the following:

  • “Proxy based ( eg: EasyMock, JMock, Mockito)”
  • “Byte code Manipulation / Classloader remapping ( eg: jMockit, PowerMock)”

What is Proxy based Mocking? How it works?

“A proxy is just an object which will be used instead of the original object. If a method of the proxy object is called than the proxy object can decide what it will do with this call:

  • delegate it to the original object
  • handles the call itself”

The blog goes on to list the limits of proxies and the fact that a “proxy does not require an instance of an interface /class if the proxy handles all method invocations itself”.
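To make the quoted idea concrete, here is a hand-rolled sketch using only the JDK's dynamic proxies (java.lang.reflect.Proxy), which is roughly how proxy-based frameworks like Mockito work under the hood; the PriceService interface and canned price are hypothetical:

```java
// Hand-rolled sketch of proxy-based mocking using the JDK's dynamic
// proxies: the proxy receives every method call and decides whether
// to handle it itself or delegate. Names here are hypothetical.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyMockDemo {
    interface PriceService {
        int priceOf(String item);
    }

    // Create a mock that handles every call itself with a canned answer,
    // never touching a real PriceService implementation.
    static PriceService mockPriceService(int cannedPrice) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("priceOf")) {
                return cannedPrice;  // the proxy handles the call itself
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (PriceService) Proxy.newProxyInstance(
                PriceService.class.getClassLoader(),
                new Class<?>[] { PriceService.class },
                handler);
    }

    public static void main(String[] args) {
        PriceService mock = mockPriceService(100);
        // A unit under test that depends on PriceService can now be
        // exercised in isolation, with no real implementation at all.
        System.out.println(mock.priceOf("book"));  // 100
    }
}
```

Note how no class ever implements PriceService here, which is exactly the quoted point: the proxy does not require an instance of the interface if it handles all method invocations itself.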

What is Classloader remapping based Mocking?

It is the notion of telling “the class loader to remap a class reference to the class file it loads”.

 

ALL QUOTES ARE FROM THE BLOG “What is Mocking in Testing?” by Piraveena Paralogarajah

I enjoyed the blog and also learned that there are other types of mocking frameworks, not just Mockito, which I will have to look into more. That being said, if you take the time to read the blog I am blogging about, you will get more detailed information on what mocking is and why it is used, plus a few other mocking-related details such as the general types of mocking frameworks.

Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Angular Testing

Angular Unit Testing

For this week’s blog post I thought I would take a look at Angular testing, considering in class we have been discussing it and are going to use it for our final project. The site I found goes into great detail about why it is good to test Angular code and how to conduct testing properly. They first start off by listing an example of code that compiles properly and “works,” but under a more complex situation with dozens of tests in a suite it may begin to fall apart. They go on to explain that every run recompiles the components and operations, taking up to 75% of your time recompiling rather than actually running the tests themselves.

Next, they show an Angular TestBed monkey patch that resets the testing module before each run and each test. The TestBed.resetTestingModule function they use cleans up all the overrides, modules, and module factories, and disposes of all active fixtures as well, essentially cleaning up what you have. An _initIfNeeded function then comes into play, preserving the factories from the previous run. If the flag is false, the TestBed re-creates the components required for the test, creating a new zone and testing module but not recompiling anything if moduleFactory is in its correct place. After this they run the code they have, clocking in at a 24-second run time to complete the test suite. Then they apply the patch by calling a setupTestSuite function and replacing beforeEach with beforeAll, causing the run time to drop to 8 seconds: a noticeable 3x improvement in time efficiency for this example. With this patch they can preserve the compilation results and re-use them for multiple tests per suite.

After this they start up Karma parallel tests, allowing the tests to be run in parallel. The website shows all the code, which is quite lengthy, and I wouldn’t want to subject you all to a blog post that takes up too much space. By clicking the link below you can see all the code they have and more. All in all, this website was pretty handy and interesting in what it described and how it showed Angular to its potential.

https://blog.angularindepth.com/angular-unit-testing-performance-34363b7345ba

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Mocking

In the post Mock? What, When, How? by Lovis Möller, he discusses mocking and when to use it. Mocking is a strategy used in test-driven development where a mock object is created to test the interactions and dependencies a class has with other classes. Lovis talks about looking at his own and other people’s code and determines some situations in which he would or would not use mocking, and why.

Lovis’s first point is that you should only mock types that are internal to your program, and avoid mocking external types. External types may have dependencies of their own that you don’t know about, and could change in a later version of the code. Avoiding mocking them makes your code more adaptable for future versions. The next thing Lovis recommends is that you do not mock values. If you mock values, you aren’t actually testing any useful part of the code. Lovis states:

“Mocking is a technique that is used to make the relationships and interactions between objects visible.”

The last big example Lovis gives is not to mock concrete classes. The example and associated code he gives make it pretty clear, as the result depends on a method that has not been covered by the mocked methods within the code. It is easy to see how this could get out of hand if you had to cover each method of a mocked class for your mock testing.

I had a lot of trouble understanding mocking, when you would want to use it, and why before reading this post. Although it is still a difficult concept, seeing circumstances in which you would or wouldn’t want to use mocking, and alternatives to mocking, makes it a little easier to see how it tests a program and why it is a useful concept. The quote I selected above made this even more apparent to me. Mocking isn’t testing the implementation of your program so much as the relationships between objects and their methods and how they interact.
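To make the quote concrete, here is a tiny hand-rolled mock in Java (plain code, not Mockito) that records interactions so a test can verify the relationship between objects rather than a return value; all the names (AuditLog, AccountService) are hypothetical:

```java
// A tiny hand-rolled mock illustrating the quote above: the mock's
// job is to make the interaction between objects visible. All names
// are hypothetical, for illustration only.
import java.util.ArrayList;
import java.util.List;

public class InteractionDemo {
    // An internal type we own, so it is safe to mock (unlike external types).
    interface AuditLog {
        void record(String entry);
    }

    // The unit under test: what matters is how it talks to its collaborator.
    static class AccountService {
        private final AuditLog log;
        AccountService(AuditLog log) { this.log = log; }
        void close(String accountId) {
            log.record("closed:" + accountId);
        }
    }

    // The mock just remembers every interaction for later verification.
    static class MockAuditLog implements AuditLog {
        final List<String> recorded = new ArrayList<>();
        public void record(String entry) { recorded.add(entry); }
    }

    public static void main(String[] args) {
        MockAuditLog mock = new MockAuditLog();
        new AccountService(mock).close("A-1");
        // Verify the relationship, not a return value.
        System.out.println(mock.recorded);  // [closed:A-1]
    }
}
```

Note that the mock targets an interface internal to the program, matching the advice above to avoid mocking external types and concrete classes.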

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Journey into the Top 12 Tools, Frameworks, and Libraries for Software Development in 2018

Another blog means another step in my journey in Software C.D.A. This blog will be a review of the blog I read, “Top 12 Tools, Frameworks, and Libraries for Software Development in 2018” by Archna Oberoi.

This blog talks about the top 12 software tools, frameworks, and libraries in 2018. Personally, I found the blog extremely well written and insightful, and clearly the writer knows what she is talking about. I suggest readers interested in this topic read the blog. I will provide a very brief summary of its main points.

In the blog, the writer mentions how the following 12 tools, frameworks, and libraries are pretty much considered to be the best for software development in 2018. They are the following:

  1. NodeJS – “the javascript runtime framework built on Chrome V8 engine” is the best for applications that require data input and output available to users in real time.
  2. AngularJS – “Introduced by Google in 2012, this javascript framework for front-end development is great for building Single Page Applications (SPAs).”
  3. React – “a javascript library by Facebook for building user interfaces (for web).”
  4. .NET Core – “an open-source, next-gen .NET framework by Microsoft.”
  5. Spring – “a dependency injection framework (Inversion of Control) that assigns dependencies to the object during run-time.”
  6. Django – “an open-source framework for web app development, written in Java” [sic: Django is written in Python].
  7. TensorFlow – “a machine learning framework by Google, meant for creating Deep Learning models.”
  8. Xamarin – “Xamarin offers an edge over the proprietary and hybrid development models as it allows developing full-fledged mobile apps using single language, i.e. C#. Moreover, Xamarin offers a class library and runtime environment, which is similar to rest of the development platforms (iPhone, Android, and Windows).”
  9. Spark – “an open-source, micro framework, meant for creating web applications in Kotlin and Java.”
  10. Cordova – “(formerly PhoneGap) is a hybrid app development framework that uses HTML, CSS, and Javascript for building mobile apps.”
  11. Hadoop – “an open-source framework by Apache that stores and distributes large data sets across several servers, operating parallely.”
  12. Torch/PyTorch – “a machine learning library for Python.”
ALL QUOTES ARE FROM THE BLOG “Top 12 Tools, Frameworks, and Libraries for Software Development in 2018” by Archna Oberoi.

 

From this blog, I have to say I found both the TensorFlow and Xamarin frameworks so interesting that I am going to do my own research on them. It was also my first time hearing about the TensorFlow framework. That being said, software development has very good tools, frameworks, and libraries worth learning about, especially if you aspire to be a great software developer.

Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Design Pattern Limit

The post Are Patterns like Mummies? by Michael Stal discusses the patterns from the book Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, otherwise known as the Gang of Four. This is the book containing the design patterns that we have been studying in class. He discusses how groundbreaking the book was for the software engineering community when it first came out. Michael also notes that nothing published afterward had quite as much impact. He questions whether more patterns exist and whether they’ve been documented, or if the book contains most if not all of the worthwhile software design patterns.

The Gang of Four design patterns are basically ways to organize and structure your code that solve certain issues you would run into and have certain advantages that can lend themselves to whatever you are doing. The authors break the design patterns down into three different categories, which are:

  • Creational
  • Structural
  • Behavioral

Creational patterns deal with the creation of objects, structural patterns are executed through inheritance and interfaces, and behavioral patterns concern themselves with the communication between objects. There are several design patterns within each of these categories.
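As a taste of the creational category, here is a minimal simple-factory sketch in Java, a close cousin of the GoF Factory Method: creation logic lives in one place so callers never name a concrete class. The shape names are hypothetical examples:

```java
// Simple-factory sketch (a cousin of the GoF Factory Method pattern):
// callers ask for a Shape by name and never touch a concrete class.
// All names are hypothetical, for illustration.
public class FactoryMethodDemo {
    interface Shape { double area(); }

    static class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    static class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    // The factory: the one place that knows about concrete classes.
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException(kind);
        }
    }

    public static void main(String[] args) {
        System.out.println(create("square", 3).area());  // 9.0
    }
}
```

Adding a new shape only touches the factory and the new class, which is the kind of localized change these patterns aim for.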

It is interesting to consider whether the patterns covered within this book are the only design patterns (or at least the strongest) within software design. That would mean every large-scale program is ultimately composed of many components using only the design patterns contained within this book. Is there even a need to define more software design patterns, or can any given program or implementation issue be solved with the patterns found within the Gang of Four book?

I agree with Michael’s closing thoughts about design patterns: I think using them correctly and consistently leads to much more functional code and encourages best practices among software engineers. Whether there are more design patterns to uncover, or we have reached our limit, the patterns we are aware of are still important for making communication between software engineers easier.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing, Equivalence Class Testing, Decision Table-Based Testing

Greetings reader!

This blog will give a short summary of topics that are essential in the Computer Science field: Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing. I will be sharing my reaction to the content and what I find useful and interesting. Without any further introduction, let’s begin.

Equivalence Class Testing is a black box method that can be used at all levels of testing (unit, integration, system). In this technique, the tester divides the set of test conditions into partitions. Equivalence class testing is used to reduce large numbers of test cases into clusters that are much easier to manage. It also gives clear guidelines for determining test cases without compromising the efficiency of testing.
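As a sketch of the idea, assuming a hypothetical isValidAge rule, the input domain splits into three equivalence classes, and one representative value per class is enough:

```java
// Equivalence class testing sketch: instead of testing every input,
// pick one representative per partition. The age rule is hypothetical.
public class EquivalenceDemo {
    // Accepts ages in the range 18..65 inclusive (hypothetical rule).
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // One representative per equivalence class:
        System.out.println(isValidAge(10));  // below range  -> false
        System.out.println(isValidAge(40));  // within range -> true
        System.out.println(isValidAge(70));  // above range  -> false
    }
}
```

Three test cases stand in for every possible integer input, which is exactly the reduction the technique promises.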

Boundary value testing is testing at the extreme ends of the input values. The idea is to select input values at the minimum, just above the minimum, the nominal value, just below the maximum, and the maximum. Equivalence class testing plays a huge role here, because boundary value testing comes after equivalence class testing. Boundary testing is used when it is almost impossible to test a large pool of test cases individually.
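A minimal Java sketch, assuming a hypothetical 1..100 range: the test values are the minimum, just above it, a nominal value, just below the maximum, and the maximum:

```java
// Boundary value testing sketch for a hypothetical 1..100 range check:
// off-by-one bugs cluster at the edges, so that is where we test.
public class BoundaryDemo {
    static boolean inRange(int value) {
        return value >= 1 && value <= 100;
    }

    public static void main(String[] args) {
        int[] boundaryCases = {1, 2, 50, 99, 100};  // min, min+1, nominal, max-1, max
        for (int v : boundaryCases) {
            System.out.println(v + " -> " + inRange(v));
        }
        // Values just outside the boundaries should fail:
        System.out.println(inRange(0));    // false
        System.out.println(inRange(101));  // false
    }
}
```

A typo like `value > 1` instead of `value >= 1` would pass a nominal-value test but be caught immediately by the min and min+1 cases.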

Decision table testing is a technique used to test system behavior for many different input combinations. This is a strategic approach where the different input combinations and their outputs are captured in a table format. Decision table testing is also called cause-and-effect testing. This method is important when it is necessary to test different combinations. One advantage is that when the system behavior differs from one set of inputs to another, rather than being the same over a range of inputs, neither equivalence class testing nor boundary value testing helps much, but a decision table can be used.

Decision tables are so simple that they can be easily interpreted and used for development and also for business. The table helps make effective combinations and ensures better test coverage. When the number of input combinations is low, this technique can typically ensure 100% coverage.
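A decision table can be written almost directly as code. The rules below (registered user, free-shipping threshold) are invented for illustration; one test per column covers every combination:

```java
// Decision table sketch: each combination of inputs maps to exactly
// one output rule. The shipping rules here are hypothetical.
public class DecisionTableDemo {
    // Inputs: is the user registered? is the order over the free-shipping
    // threshold? Output: the shipping rule that applies.
    static String shippingRule(boolean registered, boolean overThreshold) {
        if (registered && overThreshold)  return "free";
        if (registered && !overThreshold) return "discounted";
        if (!registered && overThreshold) return "standard";
        return "standard-plus-fee";
    }

    public static void main(String[] args) {
        // One case per column of the decision table covers all combinations.
        System.out.println(shippingRule(true, true));    // free
        System.out.println(shippingRule(true, false));   // discounted
        System.out.println(shippingRule(false, true));   // standard
        System.out.println(shippingRule(false, false));  // standard-plus-fee
    }
}
```

With two boolean conditions there are only four columns, so full coverage is four test cases: a concrete instance of the 100% coverage claim above.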

 

From the blog CS@Worcester – dekeh4 by dekeh4 and used with permission of the author. All other rights reserved by the author.

The Iterator Design Pattern

Ever since taking Data Structures, where we had to implement different projects and include an iterator object for each of them, I was curious about why exactly they were so pervasive. As it turned out, the Gang of Four had defined the iterator design pattern, and a helpful article on oodesign.com taught me about the motivation/intent of this pattern as well as how to apply it.

The Gang of Four classify the iterator design pattern as a behavioral pattern, as it contains functions that handle accessing the objects in a collection. The motivation behind this pattern is the ability to control and navigate through a variety of different data structures: arrays, lists, trees, stacks, queues, etc. Also, if the type of the objects within these different kinds of collections is the same, an iterator can be used to access them all in the same way. Finally, according to the Gang of Four, “a collection should provide a way to access its elements without exposing its internal structure”. This is important for security as well as for providing a single access point for users.

When implementing an iterator, there should be an interface containing useful functions such as hasNext(), next(), remove(), and other methods that might be necessary. The iterator object itself is implemented as a nested class inside a collection class, so the iterator has access to the local variables and functions of the collection. Structuring it in this way allows different types of iterator objects which handle different collection types to still implement the same interface, and users will be able to process elements in different collections in your program using the same functions.
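The structure described above might look like this in Java, with the iterator as a nested class inside a hypothetical fixed-capacity collection, so it can reach the collection's private state while clients only see the java.util.Iterator interface:

```java
// Iterator pattern sketch: the iterator is a nested class inside the
// collection, giving it access to private fields while clients only
// see java.util.Iterator. The NumberBox container is hypothetical.
import java.util.Iterator;
import java.util.NoSuchElementException;

public class NumberBox implements Iterable<Integer> {
    private final int[] items;
    private int size = 0;

    public NumberBox(int capacity) { items = new int[capacity]; }

    public void add(int value) { items[size++] = value; }

    // Clients get an Iterator without ever seeing the internal array.
    public Iterator<Integer> iterator() {
        return new BoxIterator();
    }

    // Nested class: has direct access to items and size.
    private class BoxIterator implements Iterator<Integer> {
        private int cursor = 0;
        public boolean hasNext() { return cursor < size; }
        public Integer next() {
            if (!hasNext()) throw new NoSuchElementException();
            return items[cursor++];
        }
    }

    public static void main(String[] args) {
        NumberBox box = new NumberBox(3);
        box.add(1);
        box.add(2);
        for (int n : box) System.out.println(n);  // 1 then 2
    }
}
```

Because NumberBox implements Iterable, clients can use the same for-each loop they would use on any other collection, which is the uniform-access benefit the article describes.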

Reading this article definitely cleared up a lot of misunderstandings I had about the iterator, and definitely made me appreciate its usefulness in the context of cohesively tying together different types of data structures. This article in particular was helpful because it contained examples as well as UML diagrams, and further reading and applications that were slightly beyond my scope at this point. Still, I highly recommend it for anyone who is curious about what the iterator design pattern does.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Commonly Used Software Testing Strategies

In this article, The 7 Common Types of Software Testing, founder of the Simple Programmer blog John Sonmez uses his experience to describe common strategies of software testing, explains some of their benefits and liabilities, and shows how they are applied in the world of software development. Sonmez stresses that there are many other methods, and that none of these is used in isolation.

The first Software Testing method Sonmez describes is black box testing, where we are only concerned with the output of the program, and no actual code is given to the tester. The benefits of this method are simplifying test cases to input/output, and tests are from a user perspective. The downsides are the underlying reasons behind errors are not knowable, and cases can be hard to design.

Naturally, the next method the author discusses is white box testing. The pros of having the source code to test are discovering hidden bugs, optimizing code, and faster problem solving. The cons are the tester must have knowledge of programming and have access to the code, and it only works on existing code, so missing functionality may be overlooked.

Sonmez next describes specification-based or acceptance testing. This methodology is where preset specifications guide the development process. The main benefit of this strategy is that errors are discovered and fixed early in development. The main drawback is that its effectiveness relies on well-defined and complete specifications, which are time-consuming to produce, to say the least.

The next strategy the author describes is automated testing and regression testing. This style of testing is designed to make sure changes in software do not cause problems, and it is used where manual tests are slow and costly. Sonmez explains how vital these strategies are in Agile frameworks, where software is constantly added to. Automated regression tests are integral to this methodology because the same test cases have to be applied frequently, so automation is more advantageous than manual testing.

While there are more strategies Sonmez describes in this article, and many more in general, I chose to focus on the ones we discussed in class, as the author provides good context and a broad overview of these commonly used testing methodologies that further cement my understanding of these concepts and when to use them.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.