This is a topic that I have been wanting to write about for a while but never really got around to until now. This week, I am going to review a post made by Martin Fowler. In the first part of the blog post, Mr. Fowler explains what the acronym YAGNI stands for and where the term comes from. Mr. Fowler then walks the reader through an example of when we could apply YAGNI. The example he uses is a company having two teams work on two different components of a program (one for sales and the other for pricing). The pricing team predicts that in six months’ time, they will need to create software that handles pricing for piracy risks, and Mr. Fowler argues that building it now violates the principle of YAGNI. He argues that because the team is making assumptions about something that has not happened yet, the feature the team builds might be wrong or, worse, not needed or used at all. In that case, the team will have wasted a lot of time, energy, and effort analyzing, programming, and testing something that may turn out to be useless. In the time the team spent developing the piracy precaution, they could have been working on a different required feature of the program. Furthermore, the feature they build could handle the problem incorrectly, which means they might have to refactor it later down the line. It also adds an extra layer of complexity to the program and more items for the team to repair and maintain. In addition, Mr. Fowler argues that this is a bad habit for people to get into. Building precautions in advance is futile because it is impossible to predict all possible outcomes, and as he put it, even if we tried, we would still end up getting “blindsided”.
I wanted to write about this topic now because when I was doing the intermediate assignment for Homework 4, I realized my approach was violating the single-responsibility principle: one of my methods was doing three different things. That realization got me thinking about my other bad programming habits. One of them is that I tend to think ahead and code things I believe the program will need, which violates YAGNI. So, I decided to read and review a post on YAGNI because I wanted to learn more about why you shouldn’t code things in advance, so that the next time I code, I can avoid those mental pitfalls. I think reading this blog post will help me a lot because sometimes I have a hard time making these realizations on my own unless someone brings them to my attention in discussion.
Let me just begin by saying I really like how this blog post is organized and written. It starts off very formally by giving definitions for the terminology it uses throughout the post and then transitions to explaining those same terms in layman’s terms. Because of the way the post is written, even a person without a Computer Science background can mostly follow what is going on, since the author explains the concepts using things that everyone sees and uses on a day-to-day basis.
The post starts with an example of a web URL and how the part after the question mark is actually a query string. Then the article goes on to talk about how you can format your search to look for a specific pattern or a maximum or minimum string length. After going over the basics of how to format query strings, it talks about how you can combine multiple criteria to narrow down your search.
This finds you all of the five-letter nouns in the dictionary.
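To make the idea concrete, here is a small Java sketch of how a query string like type=noun&length=5 could drive a search. The parameter names and the tiny word list are my own invention for illustration, not from the article; a real dictionary site would define its own parameters.

```java
import java.util.*;
import java.util.stream.*;

class QueryStringDemo {
    // A tiny stand-in "dictionary": word -> part of speech (made up for the sketch).
    static final Map<String, String> DICTIONARY = Map.of(
        "apple", "noun", "run", "verb", "house", "noun",
        "quick", "adjective", "table", "noun", "go", "verb");

    // Parse "type=noun&length=5" into a key -> value map.
    static Map<String, String> parseQuery(String query) {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) params.put(kv[0], kv[1]);
        }
        return params;
    }

    // Each query parameter narrows the result further, just like
    // chaining &criteria in a URL combines multiple criteria.
    static List<String> search(String query) {
        Map<String, String> p = parseQuery(query);
        return DICTIONARY.entrySet().stream()
            .filter(e -> !p.containsKey("type") || e.getValue().equals(p.get("type")))
            .filter(e -> !p.containsKey("length")
                || e.getKey().length() == Integer.parseInt(p.get("length")))
            .map(Map.Entry::getKey)
            .sorted()
            .collect(Collectors.toList());
    }
}
```

Here search("type=noun&length=5") plays the role of the five-letter-noun query: each extra parameter is one more filter applied to the same result set.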
The reason why I chose this particular blog post to read and talk about this week is that it is relevant to what we are learning in class. It is a topic we are going to go over in class and have already talked a little bit about, and we are also using it this week for the homework. I chose this post because I think it does a great job explaining the topic. The post is short and worded in a way that is easy to understand. It also uses a lot of images and examples, so it was easy to follow along with what the author was saying about the topic.
When we started talking about this topic in class, I did not think it was particularly difficult to understand, but another reason I chose this blog post is that I have always been a “use it or lose it” kind of person: whenever I learn something new, I need to apply that information or I forget about it.
In class, I immediately made the connection that you can use query strings to refine searches in databases, but after reading this blog post, I learned you can also apply them to websites. In a way, I think I have always known this, because oftentimes when I am navigating a website and want to go from one page to the next, I just change the page number or page entry in the URL. I just don’t think I would have put two and two together and realized on my own that what I was doing was modifying the query string, or that I was switching from one endpoint to another.
For this week’s blog post, I am reviewing a blog post made by Gabriel Tanner. Mr. Tanner is a software engineer at Dynatrace, and in this blog post, he talks about the characteristics of Docker Compose and why we should use it. Mr. Tanner starts the blog post by explaining why we have and use Docker.
“With applications getting larger and larger as time goes by it gets harder to manage them in a simple and reliable way.”
The first feature of Docker Compose that Mr. Tanner talks about is its portability: it can construct and tear down a development environment using the docker-compose up and docker-compose down commands respectively. The blog post also goes over common uses of Docker Compose and common reasons why people use it. Some examples of common uses are its ability to run several containers on a single host and to run your program in an isolated environment. A common reason people use Docker Compose is that they might want to run their program in an environment similar to the one used in the production stage. The post also goes on to talk about volumes, the different types of volumes and their syntax, networking so that our containers can communicate with one another, and many other topics.
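As a rough sketch of what such a setup might look like, here is a minimal compose file. The service names, images, ports, and volume are placeholders I made up for illustration; they are not taken from Mr. Tanner’s post.

```yaml
# Hypothetical two-service development environment (placeholder names).
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # host port 8080 -> container port 80
    networks:
      - appnet
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume so data survives restarts
    networks:
      - appnet
volumes:
  db-data:
networks:
  appnet:               # both services share this network and can reach each other by name
```

With a file like this, docker-compose up brings the whole environment up and docker-compose down tears it back down, which is the portability Mr. Tanner describes.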
The entire blog post is basically one big tutorial about Docker Compose. It defines the features of Docker Compose, gives examples of its uses, and explains why we should use it. I think this is a blog post worth reviewing for this class because it could be a really good resource to have. The post is a little long, but it is very thorough; I think it would be a good way to review Docker Compose before a midterm or final. In addition, the blog post covers a lot of information that we have not gone over in class, so it also provides a way for us to investigate and learn more deeply about the topic. The first half of the post covers material we have already covered in class, but the second half covers a lot of features of Docker that we have not yet covered, such as using multiple Docker Compose files by passing an optional override file. This is a feature I can see myself using in the future and one I wish I had learned sooner. A couple of semesters ago, I was doing a project in MATLAB and Java and was running several large programs on one computer. This made the project very time-consuming and difficult because it took a long time to run all of the programs and to generate and collect all of the results. Had I known what I know now about Docker, I would have done a lot of things differently.
The blog post starts by identifying the different teams in software development and what each team does or is responsible for. It also talks about how what the development team does might run counter to what the DevOps team does and create problems for the two teams. An example of such a case is when the development team adds a new feature, because the new feature might affect the stability of the code by breaking it, which leads to problems for the DevOps team. The blog post also talks about what Docker and containerization are. It defines both terms and lists the benefits and features of each. It also talks about how Docker and containerization can solve problems that arise when we use virtual machines; for example, Docker can have faster build and testing times. Then the blog identifies and talks about the different parts of the Docker architecture. The post identifies that Docker is composed of four components: images, containers, registries, and the Docker engine. The post defines each term and what each one does. The rest of the post covers the components of the Docker Engine (such as the daemon and how to submit requests to said daemon), why we should use Docker, what its benefits are, and what its alternatives are.
I personally really like this blog post made by the BNC Software company because the article is not too long or difficult to read. I have always had a terrible memory and been the kind of person where, if I read something, I can form some sort of understanding of it, but I won’t completely understand or remember the topic until I get a formal definition. So, I chose this blog post because I think it does just that for me. The post explains the terminology in almost layman’s terms, which makes it easier for me to remember and understand the material. Another reason I chose this particular post is that I think it explains the basics of Docker very well and is a great resource to have as we start moving on to more advanced topics in Docker. While reading this post, I thought it brought up an interesting point about how the development team can interfere with the work of the people on the DevOps team. This was a question I had before but never really thought about or put in the time to research. It reminds me of what we learned in Software Process Management and of LeBlanc’s Law: starting up a Docker container takes a lot of initial effort and time commitment, but it saves time in the long run because we run into fewer problems with incompatible code and take less of a risk of disrupting the stability of the code.
My name is Eric Nguyen and this is going to be the blog where I will be making posts for the CS-343 class. I am a Senior at Worcester State. I am pretty excited for the upcoming school year. It feels like forever since I have been in class and programming. I hope everyone had a fun and restful summer and hope to see you all soon.
For my final blog post for the class, I decided to look at a blog post from Codacy. In it, they note that only about 15% of developers use static analysis tools and argue that more people should. The post starts by talking about how, in the past, a lot of developers manually looked for errors and bugs in their programs, and how this method of code analysis is very error-prone because humans are prone to making mistakes and are likely to miss a bug or two. I hate to admit it, but I agree with them on this point. Human beings are flawed and will inevitably make a couple of mistakes; we don’t know when or how, but we will. It is just part of human nature. Computers can run many more tests than we can, very quickly find errors, and save a lot of time and money in development. Even though static analysis tools are powerful, we still need to pair them with some dynamic testing because static analysis cannot account for everything that could happen or go wrong.
Mr. Bugayenko talks about the structure of a simple Java project and the return values of various methods. The article shows how much work it would take to mock a program that has multiple levels of abstraction, and how, if we did mock everything, it would obscure the meaning of our tests: the tests themselves would have multiple levels of abstraction, making it that much harder to see what exactly we are testing. In the end, it would also make the test file much longer than our main file. In situations like this, it is easier to use fakes.
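To illustrate the idea, here is a minimal Java sketch of a fake: a small, real, working implementation used in place of a production dependency. The UserStore interface and all the names here are invented for illustration; they are not from Mr. Bugayenko’s article.

```java
import java.util.HashMap;
import java.util.Map;

// The dependency our code under test needs (hypothetical interface).
interface UserStore {
    void save(String id, String name);
    String find(String id);
}

// The fake: a genuine in-memory implementation. Because it actually
// behaves like a store, tests read naturally instead of stacking
// mock setup for every level of abstraction.
class FakeUserStore implements UserStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String id, String name) { data.put(id, name); }
    public String find(String id) { return data.get(id); }
}

// The class we actually want to test.
class Greeter {
    private final UserStore store;
    Greeter(UserStore store) { this.store = store; }
    String greet(String id) {
        String name = store.find(id);
        return name == null ? "Hello, stranger" : "Hello, " + name;
    }
}
```

A test then just saves a user into the fake and checks the greeting, with no per-method stubbing at all.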
In POGIL activity 8, we worked with decision table-based testing and applied this concept to the ongoing graduation problem. Unlike the previous activities, we did not go over the advantages of using a decision table as opposed to using any of the other testing frameworks. So for this blog post, I want to look more at the pros of using decision table-based testing.
The first article gives a complete step-by-step rundown of what a decision table is, why we use it for testing, and how to conduct decision table-based tests.
It is basically a review of what we covered in class with pictures/diagrams at each step.
The second article talks about the characteristics of decision tables and why some of these characteristics may make decision table-based testing preferable to the other testing frameworks.
Decision table-based testing is similar to Equivalence Class testing in the sense that it divides the tests into cases and aims for complete test coverage. Unlike Equivalence Class testing, though, decision tables are more versatile. One aspect I really liked about this second article was that the author condensed multiple action rows into just one by defining a key. This made the table smaller and easier to read without obscuring the meaning within it.
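To show how a decision table maps onto code and tests, here is a small Java sketch. The graduation rules (a credit threshold and an approved study plan) are invented for the example; they are not the actual rules from the POGIL activity. Each column of the table becomes exactly one test case.

```java
// Decision table, with each rule as a column:
//
//   Rule:            R1         R2     R3     R4
//   credits >= 120   T          T      F      F
//   plan approved    T          F      T      F
//   action           graduate   hold   hold   hold
class GraduationTable {
    static String decide(int credits, boolean planApproved) {
        if (credits >= 120 && planApproved) return "graduate";  // R1
        return "hold";  // R2-R4: any failed condition blocks graduation
    }
}
```

The one-test-per-column discipline is what gives decision table-based testing its complete coverage of the condition combinations.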
This week in class we covered what Boundary Value Testing/Analysis and Equivalence Class Testing/Partitioning are, along with the subtypes of each.
After finishing the activities in class, I feel like the assignment did a really good job teaching us how to do each type of testing, but it does not really explain why we should use each of these testing techniques.
While I think knowing how to conduct each of these testing techniques is important, the essentialist within me cannot stop thinking about the logic and origin behind them. In this blog post, Mr. Eriksson recaps the topics we covered in class and gives a couple of short, easy-to-understand examples to work through. He also gives a short explanation of the main ideas of each technique. The reason I chose this specific blog post wasn’t to introduce new information about the topic but to give the topic a quick once-over and really nail it down. This blog post gives a really nice summary of what we covered in class and does a nice job of filling in the blanks in our understanding.
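As a quick illustration of both techniques on an invented example (a score validator that accepts 0–100, a spec I made up for this post): equivalence class testing picks one representative from each class (below range, in range, above range), while boundary value testing probes right at and around the edges.

```java
// Hypothetical spec: a valid exam score is any integer from 0 to 100.
class ScoreValidator {
    static boolean isValid(int score) {
        return score >= 0 && score <= 100;
    }
}
```

The boundary values for this spec are -1, 0, 1 and 99, 100, 101; the equivalence class representatives could be something like -50, 50, and 200.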
Recently, as part of a homework assignment, I learned that in JUnit 4, you are only allowed to use one test runner. This aspect of JUnit 4 intrigued me, so I decided to do a little more research into the architecture of the JUnit 4 Runner class.
In this blog by Michael Scharhag, he explains generally what Runners are, how they work, and the class hierarchy. Throughout the post, he walks us through the class hierarchy using a series of tree diagrams and code snippets. He also explains some of the more easily missed details, such as what happens when we don’t pass a Runner to the @RunWith annotation. He also talks a little bit about possible pitfalls, such as extending the wrong class, and some neat tricks you can do with the JUnit 4 Runner, such as creating custom runners. Overall, I feel like this post was more on the technical side, and I used the following video to look at more of the application side.
The video talks more specifically about what each class in the hierarchy does and which annotations to include and when.
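To get a feel for what a runner does under the hood, here is a toy Java sketch that discovers annotated methods reflectively and invokes them. This is a simplified illustration of the idea, not JUnit’s actual implementation; the @MyTest annotation and the class names are made up.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// A stand-in for JUnit's @Test annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MyTest {}

// A toy runner: instantiate the test class, find every @MyTest method
// via reflection, invoke each one, and count the ones that don't throw.
class MiniRunner {
    static int run(Class<?> testClass) {
        int passed = 0;
        try {
            Object instance = testClass.getDeclaredConstructor().newInstance();
            for (Method m : testClass.getDeclaredMethods()) {
                if (m.isAnnotationPresent(MyTest.class)) {
                    try {
                        m.setAccessible(true);
                        m.invoke(instance);   // run the test method
                        passed++;
                    } catch (Exception e) {
                        // a real runner would record this as a failure
                    }
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return passed;
    }
}

class SampleTests {
    @MyTest public void addition() { if (1 + 1 != 2) throw new RuntimeException(); }
    @MyTest public void failing() { throw new RuntimeException("expected to fail"); }
    public void notATest() {}   // no annotation, so the runner skips it
}
```

Running MiniRunner.run(SampleTests.class) executes the two annotated methods and skips the unannotated one, which is the core of what any JUnit 4 Runner has to do before all the reporting machinery on top.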