Flyweight Design Pattern

Today's blog is about the Flyweight design pattern. Flyweight is a structural design pattern, like the adapter pattern that we learned in class and the decorator pattern from homework two. The Flyweight design pattern is applied in situations where we need to instantiate a large number of objects of a class. It reduces the amount of memory consumed and the execution time by sharing objects that are similar in some way. Flyweight objects can’t be modified once they have been constructed, which means, in short, they are immutable. A HashMap is used in the flyweight pattern to keep references to the created objects. The object's properties are also divided into two kinds, intrinsic and extrinsic. “Intrinsic properties make the object unique whereas extrinsic properties are set by client code and used to perform different operations.” An example of intrinsic state is that if a Shape object with a given color has already been created, we don’t have to create another one, because every shape has a color associated with it; instead we can just reuse the already created object. The extrinsic state is the size of the shape, which differs from object to object. To return the shared objects we have to create a flyweight factory, which is used by client programs to obtain the objects. All in all, this design pattern speeds up a program by sharing objects instead of creating new ones.

I think this design pattern is complex. I don’t see this pattern used often unless the program is creating an insanely large number of objects. During this tutorial I learned a lot about this design pattern; they used a shape example that shows the result of the flyweight design pattern. I thought this was interesting, but I couldn’t think of many other situations that this design pattern can be applied to. The tutorial was straightforward, so I didn’t disagree with anything. This design pattern is complex, so I had trouble understanding it, but when I actually implemented and ran the code I had a better understanding of it. This content did change the way I think, because I now know there are many ways to increase the performance of my code.
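To make the idea concrete, here is a minimal sketch of a flyweight factory in Java, loosely modeled on the shape example the tutorial describes; the Circle and ShapeFactory names and the demo main method are my own illustration, not the tutorial's exact code. The color is the shared intrinsic state used as the HashMap key, while the size passed to draw is extrinsic state supplied by the client.

import java.util.HashMap;
import java.util.Map;

// Flyweight: immutable once created, shared by color (intrinsic state)
class Circle {
    private final String color;

    Circle(String color) {
        this.color = color;
    }

    // Size is extrinsic state, passed in by the client on every call
    void draw(int size) {
        System.out.println("Drawing a " + color + " circle of size " + size);
    }
}

// Flyweight factory: returns a cached Circle if one with this color already exists
class ShapeFactory {
    private static final Map<String, Circle> circles = new HashMap<>();

    static Circle getCircle(String color) {
        return circles.computeIfAbsent(color, Circle::new);
    }
}

public class FlyweightDemo {
    public static void main(String[] args) {
        Circle red1 = ShapeFactory.getCircle("red");
        Circle red2 = ShapeFactory.getCircle("red");
        red1.draw(10);
        red2.draw(25);
        // Same shared instance, so only one "red" Circle was ever created
        System.out.println(red1 == red2); // true
    }
}

Because the client never constructs circles directly, thousands of draw calls can share a handful of Circle instances.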

 

https://javapapers.com/design-patterns/flyweight-design-pattern/

From the blog CS@Worcester – Phan's CS by phancs and used with permission of the author. All other rights reserved by the author.

Measuring Productivity

Having taken multiple team-based software development classes, I’ve always wondered how software development teams measure and record their productivity levels over the course of an entire project. Are there certain metrics to use in order to measure such productivity over time? Is this metric universal to all software development projects, or does it vary depending on the purpose of the software?

To research the answer a bit further, I read “Measuring Developer Productivity” by experienced developer Max Kanat-Alexander. In this blog post, he begins at the root of what productivity is: essentially, it is the making of a certain product, a process which can be deemed effective when it takes the least amount of time and materials to complete. He continues by explaining that,

The key to understanding what productivity is is realizing that it has to do with products. A person who is productive is a person who regularly and efficiently produces products. – Max Kanat-Alexander

This clarifies the definition of productivity and thus allows us to understand that productivity must be measured over an increment of time. A productive team is one that produces working software that fits all of the necessary criteria in a time-efficient manner. Now, do we count developer productivity in keyboard clicks? Lines of code? Neither. Kanat-Alexander outlines his point as he explains that,

The point here is that “computer programmer,” like “carpenter,” is a skill, not a job. You don’t measure the practice of a skill if you want to know how much a person is producing. You measure something about the product that that skill produces. – Max Kanat-Alexander

I found this point in particular to be incredibly interesting to consider. Rather than measuring a very small and insignificant portion of the job, the right metric depends on the project and the job. How many tasks of your job can you complete in a single day? In a single hour? If you are able to complete many small tasks, then you are very productive. Even completing one very large task is also considered productive. Productivity should be measured by the weight and size of the tasks and by how many of them you are able to finish to an effective working level each day of development. With this metric, it is easier for managers to grasp how productive their software developers really are.

From the blog CS@Worcester – Fall 2018 Software Discoveries by softwarediscoveries and used with permission of the author. All other rights reserved by the author.

Learning about Test-Driven Development

So for this week, I have decided to read “What is Test-Driven Development?” from the Rainforest blog. The reason I have chosen this blog is that, from what I understand of test-driven development, it is hard to apply in practice and requires a lot of time the first time you go through the process. This blog will help me in understanding the advantages and disadvantages of using this process.

This blog post goes over the benefits of test-driven development, who needs it, how it works, and the disadvantages of using it. Test-driven development is a software process that follows a short, repetitive, and continuous cycle of creating test cases for the behavior companies want in their applications. Unlike traditional software testing, test-driven development implements testing before and during development. The benefits provided by this process are quickly shipping quality code to production, efficiently building test coverage of the application, and reducing the resources required for testing. This development style is good for teams with fast release cycles that still want to ensure their customers are receiving quality results, and for teams with few in-house QA practices instilled but that still value quality. The disadvantages are that product and development teams must be in lock step, that it can be difficult to maintain transparency about changes, and that it is initially time intensive.
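As a rough illustration of the test-first cycle the post describes, here is a minimal JUnit sketch of my own (a hypothetical Discount class, not an example from the Rainforest post): the test is written first and fails, and only then is just enough code written to make it pass.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1: write the test first; it fails (red) because Discount does not exist yet
class DiscountTest {
    @Test
    void tenPercentOffOrdersOverOneHundred() {
        assertEquals(90.0, Discount.apply(100.0), 0.001);
        assertEquals(50.0, Discount.apply(50.0), 0.001);
    }
}

// Step 2: write just enough production code to make the test pass (green)
class Discount {
    static double apply(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }
}
// Step 3: refactor both the test and the production code while the test stays green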

What I think is interesting about this content is that it gives scenarios for each part to explain the process and why companies would use this style of development. The blog breaks down the meaning of each section, gives the details of the scenarios, and finally provides charts to give an idea of how the process works as a whole. The content has changed my way of thinking: I now understand the process and no longer believe it would be very difficult to introduce it to a person who is learning programming for the first time.

Based on the content of this blog, I would say it is straightforward to understand once you go over the fundamentals a few times. I do not disagree with the content of this blog because it helped me understand the idea of test-driven development with its short descriptions and the charts that illustrate it. For future practice, I shall try to refer to this process when teaching those who have difficulty with programming.

Link to the blog: https://www.rainforestqa.com/blog/2018-08-29-test-driven-development/

 

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.

Load Testing

https://reqtest.com/testing-blog/load-testing/

This blog post is an in-depth look at load testing. Load testing is a type of performance testing that identifies the limits to the operating capacity of a program. It is usually used to test how many users an application can support and whether the infrastructure that is being used can support it. Some examples of parameters that load testing can identify include response time, performance under different load conditions of system or database components, network delay, hardware limitations, and issues in software configuration.

Load testing is similar to stress testing, but the two are not the same. Load testing tests a system under peak traffic, while stress testing tests system behavior beyond peak conditions and the response of the system when it returns to normal load conditions. The advantages of load testing include improved scalability, reduction in system downtime, reduction in cost due to failures, and improved customer satisfaction. Some examples of load testing are:

  • Testing a printer by printing a large number of documents
  • Testing a mail server with many concurrent users
  • Testing a word processor by making a change in a large volume of data

The most popular load testing tools are LoadView, NeoLoad, WebLOAD, and LoadRunner. However, load testing can also be done without any additional tools; a minimal sketch follows the steps below. The steps to do so are:

  • Create a dedicated test environment
  • Determine load test scenarios
  • Determine load testing transactions for the application
  • Execute the tests and monitor the results
  • Analyze the results
  • Refine the system and retest
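For the "execute the tests and monitor the results" step, a hand-rolled load test can be as simple as firing concurrent requests at an endpoint and recording response times. Below is a small hypothetical Java sketch along those lines; the URL, user count, and request count are made-up parameters, and a real test would need proper ramp-up, think time, and result analysis.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int users = 50;            // simulated concurrent users (made-up number)
        int requestsPerUser = 20;  // requests each simulated user sends
        URI target = URI.create("http://localhost:8080/health"); // hypothetical endpoint

        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> timings = new ArrayList<>();

        for (int u = 0; u < users; u++) {
            for (int r = 0; r < requestsPerUser; r++) {
                timings.add(pool.submit(() -> {
                    long start = System.nanoTime();
                    HttpRequest request = HttpRequest.newBuilder(target).GET().build();
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    return (System.nanoTime() - start) / 1_000_000; // milliseconds
                }));
            }
        }

        long total = 0;
        for (Future<Long> t : timings) {
            total += t.get();
        }
        pool.shutdown();
        System.out.println("Requests sent: " + timings.size());
        System.out.println("Average response time: " + (total / timings.size()) + " ms");
    }
}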

This blog post did a good job at defining load testing and explaining why it is an important process. I had heard of stress testing previously, so I appreciated how the author described the differences between the two. The part that I thought was most useful was the section on the advantages of load testing. It makes it easy to see why load testing is important if you are building something like a web app. It was also useful to read about some of the most popular tools because I now have a reference if I ever need to do this kind of testing in the future. I’m glad I read this blog as load testing seems like a very important practice in the field of software testing.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

WSU Blog #5 for CS-443

URL: http://writingtestcases.blogspot.com/2013/07/decision-table-testing-tech…

Decision Table Testing Technique and Examples

The example in the blog provides a great step-by-step method for handling decision table based testing.

It also provides a video for executing this particular method. The step-by-step technique is outlined very well in the video.
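Since the post itself doesn't reproduce a table, here is a small hypothetical decision table of my own for a login form, written as a parameterized JUnit test; each CSV row encodes one rule of the table, with the two conditions (valid username? valid password?) and the expected action.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LoginDecisionTableTest {

    // Hypothetical system under test: grants access only when both inputs are valid
    static String login(boolean validUser, boolean validPassword) {
        return (validUser && validPassword) ? "GRANT" : "DENY";
    }

    // Each CSV row is one rule (column) of the decision table:
    // conditions = validUser, validPassword; action = expected outcome
    @ParameterizedTest
    @CsvSource({
        "true,true,GRANT",
        "true,false,DENY",
        "false,true,DENY",
        "false,false,DENY"
    })
    void followsDecisionTable(boolean validUser, boolean validPassword, String expected) {
        assertEquals(expected, login(validUser, validPassword));
    }
}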

From the blog Rick W Phillips - CS@Worcester by rickwphillips and used with permission of the author. All other rights reserved by the author.

Detecting fake news at its source

http://news.mit.edu/2018/mit-csail-machine-learning-system-detects-fake-news-from-source-1004

This is an interesting article about how researchers are trying to make a program that will decide whether or not a news source is reliable. The program uses machine learning and scrapes data about a site to make its determination, and it only needs about 150 articles to reliably detect whether a source is accurate. Researchers first took data from mediabiasfactcheck.com, a site whose human fact checkers analyze the accuracy of over 2,000 news sites. They then fed that data into their algorithm to teach it how to classify news sites into high, medium, or low levels of factuality. As of now, the system is 65 percent accurate at detecting these levels of factuality and 70 percent accurate at deciding whether a source is left-leaning, right-leaning, or moderate. The researchers determined that the best way to detect fake news was to look at the language used in a source's stories. Fake news sources were likely to use language that is hyperbolic, subjective, and emotional. The system was also able to read the Wikipedia pages of fake news sources and noticed that those pages contained an abnormal number of words like "extreme" or "conspiracy theory." It even found correlations with the structure of a source's URL: sources with lots of special characters and complicated subdirectories were associated with being less reliable.

I think this is noteworthy because of how important accurate information is on the internet. There are a large number of people who spread misinformation on social media and influence the behaviors and thoughts of their readers. There are constantly news stories about bots from Russia or other countries trying to spread misinformation not only to the United States but to anyone they consider a threat. The spread of this fake news can cause a lot of problems and poses a serious issue for the information age, to the point that I have heard people refer to our current time as the "misinformation age." If automated software is able to detect fake news with near-perfect accuracy, it will help the entire planet in combating such a large and seemingly unbeatable problem. Something that takes hundreds of fact checkers could take a program a few minutes, and it could be accessible to the general public for all of their needs. Although the algorithm in its current form is not accurate enough, it is a step forward and a look at a possible solution that we desperately need.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Testing: Artificial Intelligence

The podcast that I listened to this week, by the one and only Joe Colantonio from Test Talks, was very enjoyable. He talks to Angie Jones about artificial intelligence and testing. Angie is a senior software engineer in test at Twitter who has developed automation strategies and frameworks for countless software products. What Angie and Joe talk about is the reality of testing in the artificial intelligence world. Angie has noticed a lot of tools helping with artificial intelligence and claiming that artificial intelligence will save the day. What is missing from the conversation, Angie highlights, is how to test the forms of artificial intelligence that are present today, like machine learning, which many applications already use. You do not realize how many things use machine learning; Netflix and Twitter, for example, both have features that rely on it. Many see it as something that is in the future, but that is wrong. These systems are present now and will be even more prevalent later, so there needs to be more focus on testing them. People also do not see this as a testable feature and do not worry about testing it. The thing, though, is that you may think a model is learning one thing while it is actually learning something completely different.

Some of the lessons she learned from testing AI are that it challenges a lot of guidelines, especially around automation. There is no exactness or preciseness; there is a range of values that could be valid, and Angie had to get creative and dig deeper to figure out what was correct. She also learned that the tester has to know how to code, which feeds the long-running debate over whether a tester should be a programmer, because she had to test a lot of algorithms and needed to read and write code to test them. Lastly, the one piece of advice Angie would give the audience is: don't let anyone make you believe that AI and machine learning are an all-knowing black box; as testers we should know better. Read more into what machine learning is and the different applications that use it, and take the time to think about how you would test something.

The podcast for this week was very interesting. Again, this was a podcast that talked about issues that are going on now. AI and machine learning are becoming very popular and will only grow from here. As they grow, it is, as Angie said, very important now, and even more vital in the future, to figure out how to test them the correct way.
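Angie's point that there is no exactness when testing machine learning features suggests assertions over ranges rather than exact values. Here is a small hypothetical JUnit sketch of that idea; the recommender stub and its thresholds are my own invention, not anything from the podcast.

import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class RecommenderRangeTest {

    // Stand-in for a machine learning model: returns a relevance score between 0 and 1
    static double relevanceScore(String userId, String itemId) {
        // Hypothetical deterministic stub so the example runs
        return 0.82;
    }

    @Test
    void scoreForKnownGoodMatchFallsInAcceptableRange() {
        double score = relevanceScore("user-123", "item-456");
        // No single "correct" value; any score in this band counts as a pass
        assertTrue(score >= 0.7 && score <= 1.0,
                "expected a high-relevance score, got " + score);
    }
}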

From the blog mrogers4836 by mrogers4836 and used with permission of the author. All other rights reserved by the author.

Black Box Testing vs White Box Testing

Today's blog is about Black Box Testing vs. White Box Testing. First, black box testing is when we test whether a system is functional or non-functional without knowing its internal structure (the code). There are many techniques that can be used for designing black box tests, like equivalence partitioning, boundary value analysis, and cause-effect graphing. All of these techniques are similar in that they check input values against valid output values. There are also many advantages to black box testing. First, the tester doesn't really have to be a programmer or need to know how the program is implemented. Second, the testing is done more hands-on and from the user's point of view, which allows reviews of the product to be more varied, since every tester/user has a different opinion of the product. There are also disadvantages to black box testing. First, only a limited number of inputs can be tested, which means many areas of the product are left untested. Second, without specifications the test cases are difficult to design, and tests can be redundant because of that lack of information. A good example of black box testing, I think, is a tester using an app and checking whether every action works as it should.
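Boundary value analysis, one of the black box techniques mentioned above, is easy to show with a tiny example. The following JUnit sketch is my own hypothetical illustration: for a function specified to accept ages from 18 to 65, the tests exercise the values just below, at, and just above each boundary, without looking at how the function is implemented.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class AgeValidatorBoundaryTest {

    // Hypothetical system under test: valid ages are 18 through 65 inclusive
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    void valuesAroundTheLowerBoundary() {
        assertFalse(isEligible(17)); // just below the boundary
        assertTrue(isEligible(18));  // on the boundary
        assertTrue(isEligible(19));  // just above the boundary
    }

    @Test
    void valuesAroundTheUpperBoundary() {
        assertTrue(isEligible(64));
        assertTrue(isEligible(65));
        assertFalse(isEligible(66));
    }
}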

White box testing is when you have full access to the information about the product and test its internal design. It checks the inputs of the test cases against the expected outputs. A great description of white box testing is that it is "like the work of a mechanic who examines the engine to see why the car is not moving." White box testing is usually applied to unit testing, but I think it's the most effective method because you have all the specifications and almost all parts of the product will be covered. The disadvantage is that it requires skilled developers, because some tests are complex.

All in all, both black box testing and white box testing are effective in their own way. Black box testing is mainly for acceptance testing and white box testing is for unit testing. I like white box testing more because I have full access to the code, which means I can better understand the mechanics of the system.

http://softwaretestingfundamentals.com/differences-between-black-box-testing-and-white-box-testing/

From the blog CS@Worcester – Phan's CS by phancs and used with permission of the author. All other rights reserved by the author.

Journey into Unit Testing with Test Driven Development

As I take my first step in my journey into software quality assurance and testing, I dive into unit testing. After searching the web, I found a really good podcast named “Unit Testing With Test Driven Development” by Beej Burns. This podcast is about unit testing and focuses mostly on Test-Driven Development (TDD). I will be using this podcast to help me write this blog.

In the podcast they had two guests, John and Clayton. They came on the podcast and talked about their book ‘Practical Test-Driven Development Using C# 7: Unleash the power of TDD by implementing real world example under .NET environment and JavaScript’. I personally have not read this book. According to the podcast, the book is meant for software developers with a basic knowledge of TDD and is intended for those who wish to understand the benefits of TDD. It is most useful for individuals who know basic C#, since all of the examples are in C#.

The following Q&A covers questions asked in the podcast and the guests' answers. The answers I am writing are summarized in my own words but originally derived from the guests on the podcast. Also, I am not covering all of the Q&A, just the ones I found interesting and where I liked how the guests answered the question. If you want to hear the original questions and answers, visit the podcast site: https://completedeveloperpodcast.com/episode-140/. Let's start.

What is Unit Testing?

Unit testing is the ability to test in isolation; that is, to test one small piece of an application without affecting, or being affected by, the rest of the tests.

Why is Unit Testing important?

We use unit testing to make sure each unit performs as intended. Unit testing is important because it minimizes the risk of errors in your software, but it also forces you to have better code structure. It allows major changes in your code to happen at lower risk. Another reason it is important is that it allows new developers to understand your software's structure.

*Note that naming conventions are very important so developers can understand what is being tested.*
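As a tiny illustration of that naming note, here is a hypothetical JUnit example of my own using one common convention (unit of work, scenario, expected behavior) so the test list reads like documentation; the ShoppingCart class is invented just so the example compiles.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    // Name pattern: unitOfWork_scenario_expectedBehavior
    @Test
    void totalPrice_whenCartIsEmpty_returnsZero() {
        assertEquals(0.0, new ShoppingCart().totalPrice(), 0.0001);
    }

    // Minimal hypothetical class under test so the example compiles
    static class ShoppingCart {
        double totalPrice() {
            return 0.0;
        }
    }
}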

What is the point of Test-Driven Development (TDD)?

The point of TDD is doing the right thing without making a mess: short iteration cycles that keep reassuring you that you and your application are doing the right thing.

According to one of the guests, when practicing Test-Driven Development there are three stages/steps:

  • Red phase: write a test and watch it fail
  • Green phase: make the test pass
  • Refactor phase: clean up the production code and the test code

The idea of TDD is to build the test before the code.

Other benefits of TDD, besides more organized code, are:

  • Reduces the overall number of bugs and makes bug resolution quicker.
  • Less down time.
  • Better requirements.
  • Problems can be found just by running the tests, which show where the code went wrong.

What are some of the things people get wrong about unit testing and TDD?

Writing your tests too close to your implementation. “Tests should represent the business rules, not how you decided to implement the business rule. That way, when you go and change stuff later on, like the implementation, and you want to refactor and move things into a different class, it doesn't break what your tests do, because you are only restructuring how it is done.”

How do you manage complexity in a unit test, and how do you structure your overall testing projects?

“For an overall method, Uncle Bob (Robert Martin) suggests only having methods that are five lines or less in size.” The guest who answered this question takes Robert Martin's suggestion to heart. He goes on to explain that if a method has more than five lines, he'll break it down into more than one method. If a class has more than five methods, he'll break it down into more than one class. If a folder has more than five classes, he'll break it down into more than one folder. If a project has more than five folders, he'll consider breaking it down into more than one project. He says he does the same exact thing when testing, except he doesn't care as much about line length, because most test cases are usually 3-4 lines long. He uses an arrangement where he sets up the pre-conditions of the test and then the action takes 1-2 lines. These tests are pretty small to start with, but within a test class he won't have more than five things overall that he is testing: no more than five units, and no more than five logical assertions, which tends to reduce the size of any one thing within a test project.

What are some good practices you can use to make sure your tests are maintainable in the long run?

  • “Don’t forget to refactor your tests, because your test suite is just as important as your production code.”
  • “Remember you are testing small units or small pieces that should never have tightly connected dependencies, so that you feel comfortable substituting a mock, spy, or fake implementation. Be careful not to test your fake, mock, or spy.” (A small sketch of this idea follows this list.)
  • “Wherever the code ends, that’s where the test should end.”
  • “Make sure we abstract all third-party code.”
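To illustrate substituting a fake for a dependency, here is a small hypothetical sketch of my own in Java/JUnit (not from the podcast or the book, and in Java rather than the book's C#): the service under test depends on a notifier interface, and the test swaps in a fake so it never touches real email infrastructure. The assertion is about the service's business rule, not about the fake itself.

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

// The dependency is abstracted behind an interface so a fake can be substituted
interface Notifier {
    void send(String recipient, String message);
}

// Production code under test: the business rule is "notify the user on signup"
class SignupService {
    private final Notifier notifier;

    SignupService(Notifier notifier) {
        this.notifier = notifier;
    }

    void register(String email) {
        // ... persist the user somewhere ...
        notifier.send(email, "Welcome!");
    }
}

class SignupServiceTest {

    // Fake implementation: records calls in memory instead of sending real email
    static class FakeNotifier implements Notifier {
        final List<String> recipients = new ArrayList<>();

        @Override
        public void send(String recipient, String message) {
            recipients.add(recipient);
        }
    }

    @Test
    void registeringAUserSendsExactlyOneWelcomeNotification() {
        FakeNotifier fake = new FakeNotifier();
        new SignupService(fake).register("person@example.com");

        // We assert on the service's behavior, not on the internals of the fake
        assertEquals(List.of("person@example.com"), fake.recipients);
    }
}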

 

I will wrap this up by saying that unit testing is a very important skill to have no matter what. It helps in creating clean code and reduces the risk of errors. Unit testing allows software developers to make large changes in the code at minimal risk, and it allows the code structure to be understood by new developers. One thing to note when creating unit tests and developing software is that you must make sure to use good naming conventions. And when developing software, it's best to start by writing the tests first.

My name is Yesenia Mercedes-Nunez and this has been YessyMer in the World Of Computer Science. Thank you for your time, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Writing Efficient Code

In the second part of Brandon Gregory’s blog post “Coding with Clarity: Part II“, he starts with the assertion that “Good programmers write code that humans can understand”. In the spirit of clarity, Gregory continues to elaborate on solid design principles that are helpful in software engineering.

The first principle is the Law of Demeter, or as Gregory puts it, “the principle of least knowledge.” He describes this principle as an application of loose coupling, another design strategy, which says that one part of a program should rely on as little knowledge of other parts as possible. Adhering to this principle helps ensure flexibility when features are added or modified.

An example Gregory uses to demonstrate this principle is the use of getter/setter methods to provide access to class data, as opposed to allowing other classes to access that data directly. Applying this tactic ensures that future modifications do not mess up any other parts of your program, since all modifications will be confined to the getter/setter methods.
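As a quick hypothetical illustration of that getter/setter point (my own example, not Gregory's): callers only go through the accessor methods, so the internal representation can change later without touching client code.

// Client code never touches the field directly, only the accessors
class Account {
    private double balance; // internal representation; could later become BigDecimal

    public double getBalance() {
        return balance;
    }

    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balance += amount;
    }
}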

The next programming practice is the Interface Segregation Principle. This is to make sure no class is forced to carry methods it doesn't use. If a class has a bunch of methods and not all of them are used by every client, the better strategy is to separate that class into more specific interfaces or subclasses. This has a similar goal to the strategy design pattern that we discussed in class.
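A tiny hypothetical Java sketch of interface segregation (my own illustration): instead of one fat machine interface, printing and scanning are split so a simple printer isn't forced to implement scanning it doesn't support.

// Segregated interfaces: each client depends only on the methods it actually uses
interface Printer {
    void print(String document);
}

interface Scanner {
    String scan();
}

// A simple printer implements only what it supports
class BasicPrinter implements Printer {
    @Override
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
}

// A multifunction device can opt into both capabilities
class MultiFunctionDevice implements Printer, Scanner {
    @Override
    public void print(String document) {
        System.out.println("Printing: " + document);
    }

    @Override
    public String scan() {
        return "scanned contents";
    }
}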

However, Gregory warns us that abstraction can be taken too far. It is possible to abstract so much that the program contains an excessive number of interfaces or subclasses. The author reminds us that the goal of abstraction is to reduce complexity.

The final principle in the article is the open/closed principle: software should be open for extension but closed for modification. If the program is designed correctly, the existing implementation should perform as specified and should not need to be modified. Instead, to change the functionality of the program, all that should be done is add new functionality, without changing any of the existing code.
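Here is a brief hypothetical sketch of the open/closed idea in Java (again my own example, not from the article): new behavior is added by writing a new class against an existing abstraction, while the code that uses the abstraction stays untouched.

import java.util.List;

// Closed for modification: this abstraction and the calculator below never change
interface DiscountRule {
    double apply(double total);
}

class TotalCalculator {
    double totalAfterDiscounts(double total, List<DiscountRule> rules) {
        for (DiscountRule rule : rules) {
            total = rule.apply(total);
        }
        return total;
    }
}

// Open for extension: new rules are added as new classes, nothing above is edited
class HolidayDiscount implements DiscountRule {
    @Override
    public double apply(double total) {
        return total * 0.9; // 10% off
    }
}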

I found this two-part “Coding with Clarity” series very helpful. Almost all of the principles Gregory explains have been applicable to content we have covered in class. I found his writing style easy to follow, and the particular examples he uses to demonstrate the principles are cogent and illuminating. I recommend the series to everybody looking to improve their design knowledge.

 

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.