Category Archives: Week 9

Choices

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post talks about the choices, or options, that software designers give users while they are using programs to accomplish their tasks. Pull up the Tools | Options dialog box and you will see a history of arguments that the software designers had about the design of the product. Some programs ask users to make a great many choices, even though most users don’t understand some of those choices or see no need to make them; as a result, users get distracted or confused. Asking the user to make a decision isn’t in itself a bad thing. Freedom of choice can be wonderful. People love to order espresso-based beverages at Starbucks because they get to make so many choices.

The problem comes when you ask users to make a choice they don’t care about. In fact, users care about far fewer things than a software designer might think. They are using your software to accomplish a task. They care about the task and really want to accomplish it. If it’s a graphics program, they probably want to be able to control every pixel to the finest level of detail. If it’s a tool to build a web site, you can bet they are obsessive about getting the web site to look exactly the way they want it to look. They do not, however, care one whit whether the program’s own toolbar is on the top or the bottom of the window, or how the help file is indexed. They don’t care about a lot of things, and it is the designers’ responsibility to make those choices for them so that they don’t have to. Sometimes it is very hard to choose between conflicting requirements, and the designer hasn’t thought hard enough to decide which option is really better. When designers abdicate that responsibility by forcing the user to decide something, they’re probably not doing their job. Someone else will make an easier program that accomplishes the same task with fewer intrusions, and most users will love it.

A major rule of user interface design: “Every time you provide an option, you’re asking the user to make a decision.” That means users will have to think about something and decide about it. Giving users options is not necessarily a bad thing, but, in general, you should always try to minimize the number of decisions that users have to make. This doesn’t mean eliminating all choice. There are plenty of choices users will have to make anyway: the way their document will look, the way their web site will behave, or anything else that is integral to the work the user is doing and really cares about.

Article: https://www.joelonsoftware.com/2000/04/12/choices/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

How to design a REST API

While I was looking for information about REST APIs (Representational State Transfer), I came across the blog post “How to design a REST API”. I found this blog to be useful and simple, and it includes examples. The blog and its Quick Reference Card are divided into four main sections.

The first section covers general concepts: one of the main goals of an API should be to reach as many developers as possible, which means the API needs to be self-describing and simple. On security, the blog stresses how important OAuth is for managing authorization; it is used by many companies and relies on tokens so that the real credentials are never exposed. API domain names also matter and need to be clear and straightforward.

The second section is about URIs. A RESTful API takes a different approach, using concrete nouns rather than action verbs. The API needs case consistency and plural resource names to keep the structure clear, and versioning should be mandatory in the URL. The CRUD-like operations section explains, with examples, how the HTTP verbs (GET, POST, PUT, DELETE) map onto the CRUD actions (READ, CREATE, UPDATE/CREATE, DELETE).
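
To make that verb-to-CRUD mapping concrete, here is a minimal Java sketch using the JDK’s built-in HTTP request builder (Java 11+). The api.example.com host and the users resource are hypothetical, not taken from the reference card:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestVerbExample {
    public static void main(String[] args) {
        String base = "https://api.example.com/v1";   // hypothetical API; the version is kept in the URL

        // READ: GET a single resource from a plural, noun-based collection (no action verbs in the URI)
        HttpRequest read = HttpRequest.newBuilder()
                .uri(URI.create(base + "/users/42"))
                .GET()
                .build();

        // CREATE: POST a new resource to the collection
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create(base + "/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada\"}"))
                .build();

        // UPDATE: PUT the new state of a specific resource
        HttpRequest update = HttpRequest.newBuilder()
                .uri(URI.create(base + "/users/42"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada Lovelace\"}"))
                .build();

        // DELETE: remove a specific resource
        HttpRequest remove = HttpRequest.newBuilder()
                .uri(URI.create(base + "/users/42"))
                .DELETE()
                .build();

        // Print the verb-to-URI mapping; actually sending these would require a real server
        for (HttpRequest request : new HttpRequest[] { read, create, update, remove }) {
            System.out.println(request.method() + " " + request.uri());
        }
    }
}
```

Notice how the URIs never change shape: the verb carries the action, and the URI only names the resource.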

The third section is about query strings: combining filters, pagination, partial responses, and sorting gives better query results and also optimizes bandwidth. We can use a range query parameter, and the API response should contain Link, Content-Range, and Accept-Range headers.

Finally, there are the status codes that every developer will come across. We need to return errors for the common cases in a way that everybody understands, and the standard HTTP return codes are highly recommended for this. The first case is success, which means the API accepted the request. The second is a client error, when the client makes a request but the API can’t accept it, for reasons such as 401 Unauthorized or 405 Method Not Allowed. There are also server errors, where the request is correct but a problem occurred on the server; the system returns a 500 Internal Server Error. This return is fairly common, and the client cannot do anything about it.
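
As a rough illustration of both the query-string and the status-code advice, here is a hedged Java sketch. The endpoint is hypothetical and the exact parameter names (fields, range, sort) are assumptions that only loosely follow the card’s conventions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QueryAndStatusExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Query string combines a partial response, a range for pagination, and a sort order
        // (the host api.example.com is hypothetical, so sending this will only work against a real server)
        URI uri = URI.create(
                "https://api.example.com/v1/users?fields=name,email&range=0-24&sort=-createdAt");
        HttpRequest request = HttpRequest.newBuilder().uri(uri).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Pagination metadata comes back in headers such as Content-Range
        response.headers().firstValue("Content-Range")
                .ifPresent(range -> System.out.println("Content-Range: " + range));

        // React to the three broad families of status codes
        int status = response.statusCode();
        if (status >= 200 && status < 300) {
            System.out.println("Success: the API accepted the request");
        } else if (status >= 400 && status < 500) {
            System.out.println("Client error (e.g. 401 Unauthorized, 405 Method Not Allowed): fix the request");
        } else if (status >= 500) {
            System.out.println("Server error (e.g. 500 Internal Server Error): nothing the client can do");
        }
    }
}
```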

I like this summary of practices; it covers a large amount of good, detailed references. The Quick Reference Card is clear and always has a good example next to each point. I find that much easier to understand, since they use examples from popular sites such as Google, Facebook, and Twitter. Everyone is familiar with those companies, and basing the instructions on them makes the examples more credible.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

Testing & Agile

For this week’s blog post, I decided to find a post on a topic that we haven’t explicitly covered in class. I found an article on Agile testing, and my interest in Agile as a whole drew me in. The article is on QASymphony and breaks down what the Agile methodology is, some examples of Agile testing, and how to align testing with the Agile delivery process. For the sake of this post, I will assume you’re familiar with the general concept of the Agile methodology for software development and that you understand the general idea of Scrum and Kanban.

One of the interesting testing strategies for Agile development is Behavior Driven Development (BDD) testing. These tests are similar to, and essentially a replacement for, the Test Driven Development (TDD) style tests used in traditional waterfall development cycles. Instead of writing unit tests before code is written, BDD tests operate at a much higher level, the level at which user stories are written. Development of the code is based on end-user behavior, and the tests need to be readable by people who might not be particularly technical, since they can often replace requirement documentation. This saves time in the long run because the process doesn’t have to be duplicated for those who can’t read tests written in the traditional TDD style. The best part about BDD testing is that the tests do not necessarily need to be written by technical team members. They can be, and often are, written with input from business partners, scrum masters, and product owners who might not be able to contribute when it comes to writing unit tests in the TDD format. Like TDD, this style still allows for testing small snippets of functionality, so that one aspect of TDD that is somewhat Agile remains intact in BDD testing.
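
To give a feel for how readable such a test can be, here is a minimal Java sketch of a behavior-style test written in a given/when/then structure. JUnit 5 is assumed to be on the classpath, and the ShoppingCart class and scenario are hypothetical, not taken from the article:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical domain class, defined here only so the example is self-contained
class ShoppingCart {
    private int itemCount = 0;

    void addItem() {
        itemCount++;
    }

    int itemCount() {
        return itemCount;
    }
}

// The test reads like the user story it came from:
// "Given an empty cart, when the shopper adds an item, then the cart contains one item."
class ShoppingCartBehaviorTest {

    @Test
    void shopperAddsAnItemToAnEmptyCart() {
        // Given an empty cart
        ShoppingCart cart = new ShoppingCart();

        // When the shopper adds an item
        cart.addItem();

        // Then the cart contains exactly one item
        assertEquals(1, cart.itemCount());
    }
}
```

Because the given/when/then wording mirrors the user story, a non-technical product owner can read the test and confirm it captures the intended behavior.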

After reading through the other testing strategies and how to align testing with the Agile methodology, I realized that I’ve already been working in systems that operate like this at my summer internships. We ran our testing in a format pretty much identical to the concept of BDD, where everyone on the team contributed to the different testing obstacles and even came up with test cases that the code written afterwards would need to pass. And like the article says, it was common for not all of the tests to pass immediately; this is part of the concept of “failing fast” that Agile is all about. I am glad I found this article to give me more insight into how businesses around the world, including the ones I interned at, are converting to Agile and utilizing these testing strategies if they haven’t already.

Link to original article: https://www.qasymphony.com/blog/agile-methodology-guide-agile-testing/

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Journey into Software Architecture and Its Benefits

As I take another step in my journey in software C.D.A., I dive into software architecture and its importance and benefits. For this week’s blog I found a post named “15 BENEFITS OF SOFTWARE ARCHITECTURE” by Ekaterina Novoseltseva. The blog talks about the benefits of software architecture. I will sum up its main points and what I’ve learned from it.

What is something new I learned from this blog?

From reading the blog “15 Benefits of Software Architecture” I have learned the 15 reasons why having a software architecture is important. I have also learned that it is not only important, it can be considered the foundation of a strong software program.

What is Software Architecture? 

The idea of software architecture is that it is the blueprint for building a software program. It is usually the step where the team decides what design will be implemented in the software and what each team member will contribute to the program. “Architecture is an artifact for early analysis to make sure that a design approach will yield an acceptable system. Software architecture dictates technical standards, including software coding standards, tools, and platforms.” as stated in Ekaterina Novoseltseva’s blog.

The following are the 15 Benefits of Software Architecture:

  1. “Solid foundation” – when creating a program or project having a solid foundation is part of the Software Architecture.
  2. “Makes your platform scalable.”
  3. “Increase performance of the platform”
  4. “Reduces cost and avoids code duplicity”
  5. “Implement a vision” – where a software architecture provides the big picture.
  6. “Cost saving” – helps point out areas where money can be saved within a project.
  7. “Code maintainability” – helps programmers be able to maintain the code within a project.
  8. “Enable quicker changes” – able to change the program at a fast pace as what the program must do changes.
  9. “Increase quality of the platform” – making the software quality incredibly better.
  10. “Helps manage complexity”
  11. “Makes the platform faster.”
  12. “Higher adaptability”
  13. “Risk management” – helps reduce the risk or chances of failure.
  14. “Reduces development time.”
  15. “Prioritize conflicting goals”

The list above shows the benefits of software architecture, and through them we can see its importance. Reading this blog has made me realize that software architecture is a very important step when developing a software system or program. This has been YessyMer in the World Of Computer Science, until next time. Thank you for your time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Unified Modeling Language (UML)

Nowadays, the Unified Modeling Language has made it easier to describe software systems, business systems, and many other systems. Its diagrams explain things with both words and pictures, which shows that UML is practical and that anybody should be able to use it. UML first appeared in 1997 and its content is controlled by the Object Management Group. The primary contributors to UML are Grady Booch, James Rumbaugh, and Ivar Jacobson.

UML Basic Notation

(Figure: basic UML class notation)

Why UML?

UML has unified the terminology and the different notations, which leads to great communication between all parties across the different departments of a company. It is also much easier for co-workers to access or transfer information while they are working on the same project. UML is a very powerful modeling language, which makes it a good fit even for small projects, and stereotypes can extend its functionality if it is not sufficient for some kinds of projects. UML did not come out of nowhere; it grew out of real-world problems with existing modeling languages that needed modification or unification. This is why it is so widely supported: it guarantees usability and functionality based on real-life problems.

UML 2.0 defines 13 different types of diagrams, each of which may be expressed with different levels of detail. They are classified into three categories: the Structural diagrams, the Behavioral diagrams, and the Interaction diagrams.

– The Structural Diagrams represent elements that are static in nature and are fundamental to the UML modeling of a system. This category contains:

The Class diagram, The Component diagram, The Composite Structure diagram, The Deployment diagram, The Object diagram, The Package diagram.
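
As a small illustration of how the most familiar structural diagram, the class diagram, maps onto code, here is a hedged Java sketch; the Person, Student, and University classes are hypothetical, and the comments note how each piece would be drawn:

```java
// Superclass: drawn as a box with name, attribute, and operation compartments;
// Student points to it with a hollow-triangle generalization arrow.
class Person {
    private String name;          // attribute compartment: - name : String

    Person(String name) {
        this.name = name;
    }

    String getName() {            // operation compartment: + getName() : String
        return name;
    }
}

class University {
    private String name;

    University(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

// Student "is a" Person (generalization) and "has a" University (association line).
class Student extends Person {
    private University university;   // association: Student ----> University

    Student(String name, University university) {
        super(name);
        this.university = university;
    }

    University getUniversity() {
        return university;
    }
}

public class UmlClassDiagramExample {
    public static void main(String[] args) {
        Student student = new Student("Ada", new University("Worcester State University"));
        System.out.println(student.getName() + " studies at " + student.getUniversity().getName());
    }
}
```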

– The Behavioral Diagrams represent how the system functions when modeled. This category contains:

Use Case Diagram, Activity Diagram, State Machine Diagram.

– The Interaction Diagrams represent how the flow of data and control works in the modeled system. This category contains:

Communication Diagram, Sequence Diagram, UML Timing Diagram, Interaction Overview Diagram.

 

In conclusion, the Unified Modeling Language is an internationally accepted standard for object-oriented modeling and can be used to represent a model that adopts the best software architecture practices.

References:

https://commons.wikimedia.org/wiki/Unified_Modeling_Language

https://en.wikipedia.org/wiki/Unified_Modeling_Language

https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

B7: Black-Box vs. Gray-Box vs. White/Clear-Box Testing

http://blog.qatestlab.com/2011/03/01/difference-between-white-box-black-box-and-gray-box-testing/

I found an interesting blog post this week that talked about the differences between white box, black box, and gray box testing. It started with black box testing, explaining that it is an approach where the tester has no access to the source code or any part of the internal software. The basic goal of this testing type is to make sure that inputs and outputs work from the point of view of a normal user. The blog goes on to describe the main features of this testing type: the test design is based on the software specification and detects anything from GUI errors to control flow errors. It requires little test preparation, which helps make the whole process much faster, but it lacks the detail needed to test individual parts of the software. The blog then moves on to white box testing, which gives the tester access to the source code. This lets the tester know which line of code corresponds to which functionality, and allows more detailed, individual tests of code functionality. It enables more thorough testing and the ability to anticipate potential problems, but it takes more time and can be complex and expensive. As for gray box testing, the blog explains that it is an approach where testers have only a partial understanding of the internal structure. Its advantages and disadvantages are a combination of those of white and black box testing.
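
To make the contrast concrete, here is a minimal Java sketch; JUnit 5 is assumed, and the PriceCalculator example is hypothetical rather than taken from the blog. The first test is black box, derived purely from a stated specification, while the second is white box, written after reading the source in order to cover the boundary branch:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical code under test: orders of $100 or more get a 10% discount
class PriceCalculator {
    double finalPrice(double subtotal) {
        if (subtotal >= 100.0) {
            return subtotal * 0.9;
        }
        return subtotal;
    }
}

class PriceCalculatorTest {

    // Black box: based only on the spec ("orders of $100 or more get 10% off"),
    // with no knowledge of how finalPrice is implemented.
    @Test
    void largeOrderGetsDiscount() {
        assertEquals(180.0, new PriceCalculator().finalPrice(200.0), 0.001);
    }

    // White box: written after reading the source, specifically to exercise
    // the >= boundary branch at exactly 100.0.
    @Test
    void boundaryAtExactlyOneHundredIsDiscounted() {
        assertEquals(90.0, new PriceCalculator().finalPrice(100.0), 0.001);
    }
}
```

A gray box tester would sit in between: knowing, say, that a boundary exists around $100 without seeing the exact code.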

I chose this article because I remember that we learned about this subject at the beginning of the year. I wanted better knowledge of the advantages and disadvantages, which is what sparked my initial curiosity. I found the content really interesting because it explains what testing looks like when you have different amounts of access to the code. I enjoyed how the post explained the types of testing sequentially while also explaining how they differ from each other. I was able to grasp an understanding of these testing types while also understanding the importance and vital role that each plays depending on the situation. The most interesting part of the blog post was gray box testing, because it combines aspects of black and white box testing. It also deals with regression testing and matrix testing, which are very important when testing bits of code at a time. I found that the diagrams used in the post gave the reading an easier flow, which helped me understand it better. Overall, the blog is a great source that summarized and simplified some detailed ideas.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

Visitor Design Pattern

For this week’s blog post I will be discussing another design pattern; this time I will talk about the Visitor design pattern as described on Source Making’s website. Essentially, the Visitor design pattern allows you to add methods to classes of different types without much alteration to those classes, letting you define completely different behavior depending on the class being visited. Because of this, you can also define external classes that extend other classes without majorly editing them. Its primary focus is to abstract functionality that can be applied to an aggregate hierarchy of element objects. This promotes lightweight element classes, because the processing functionality is removed from their list of responsibilities, and new functionality can easily be added later by creating a new Visitor subclass. The implementation of Visitor begins when you create a visitor class hierarchy that defines a pure virtual visit() method in the abstract base class for each of the concrete derived classes in the aggregate node hierarchy. From here, each visit() method accepts a single argument: a pointer or reference to an original Element-derived class. In short, each operation to be supported is modelled with a concrete derived class of the visitor hierarchy. A single pure virtual accept() method is then added to the base class of the Element hierarchy, defined to receive the visitor as its single argument. The accept() method causes the flow of control to find the correct Element subclass; once a visit() method is invoked, the flow of control is vectored to the correct Visitor subclass. The website listed below goes into much more detail about what exactly happens, but here I have summed it up so that most should be able to follow. In essence, the Visitor pattern makes adding new operations easy, since you simply add a new Visitor-derived class, but if the Element subclasses are not stable, keeping everything in sync can be a bit of a struggle.
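
The article presents the pattern in C++-style terms (pure virtual methods); below is a minimal Java sketch of the same double-dispatch structure, with hypothetical Circle and Square elements and an AreaVisitor operation added without touching them:

```java
// Element hierarchy: each concrete element accepts a visitor
interface Shape {
    void accept(ShapeVisitor visitor);
}

class Circle implements Shape {
    double radius = 2.0;

    @Override
    public void accept(ShapeVisitor visitor) {
        visitor.visit(this);   // double dispatch: control flows to the Circle overload
    }
}

class Square implements Shape {
    double side = 3.0;

    @Override
    public void accept(ShapeVisitor visitor) {
        visitor.visit(this);
    }
}

// Visitor hierarchy: one visit() overload per concrete element
interface ShapeVisitor {
    void visit(Circle circle);
    void visit(Square square);
}

// A new operation, added without editing Circle or Square
class AreaVisitor implements ShapeVisitor {
    @Override
    public void visit(Circle circle) {
        System.out.println("Circle area: " + Math.PI * circle.radius * circle.radius);
    }

    @Override
    public void visit(Square square) {
        System.out.println("Square area: " + square.side * square.side);
    }
}

public class VisitorExample {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        ShapeVisitor area = new AreaVisitor();
        for (Shape shape : shapes) {
            shape.accept(area);   // each element vectors to the right visit() overload
        }
    }
}
```

Adding another operation, say a PerimeterVisitor, would only require one new class implementing ShapeVisitor; the element classes stay untouched.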

Example of Visitor Design Pattern

The example they use in the article is based on the operation of a taxi company. When somebody calls a taxi company (literally accepting a visitor), the company dispatches a cab to the customer. Upon entering the taxi, the customer, or Visitor, is no longer in control of his or her own transportation; the taxi driver is.

All in all, I thought this website was really good at explaining what the Visitor Design Pattern is. I have used this website before for previous research into design patterns and more.

 

https://sourcemaking.com/design_patterns/visitor

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Keep Software Design Simple

Good day, my dear reader! With the semester now quite far along, one can correctly assume that I have learned quite a bit regarding software design, and I have. With all this new information, I read an article that has helped me put all of it in perspective. This article was Simplicity in Software Design: KISS, YAGNI, and Occam’s Razor from the Effective Software Design blog.

This particular blog post is all about keeping your software design simple. It details three design principles to keep in mind while designing software, along with the effects of making a design too complex or, conversely, too simplistic.

The first principle is “Keep it simple, stupid”, more commonly known as KISS, holding that keeping designs simple is a key to successful design.

“The KISS principle states that most systems work best if they are kept simple rather than made complex; therefore simplicity should be a key goal in design and unnecessary complexity should be avoided.”

The next principle is “You Aren’t Gonna Need It”, or YAGNI, stating that functionality should be added only when it is actually needed and not before.

The last principle is Occam’s Razor, saying that when creating a design, we should avoid basing our designs on our own assumptions.

Not following these principles can result in complex designs that lead to feature creep, software bloat, and over-engineering. It could alternatively result in oversimplification where the design is too simplistic. This could lead to trouble down the line in areas such as maintainability, extensibility, and reusability.

Reading this blog post made me sit back and think about my previous programming assignments. Looking back, my programs have indeed been overly complex. A great quote provided in the blog post is from Brian Kernighan.

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

This gave me a good chuckle and is something I can definitely agree with and have done in the past. I will admit that I had never considered the consequences of oversimplification. When the majority of your programs are single use for a specific scenario, you never really have to consider the consequences beyond getting the program done. An excellent quote provided by the author in the blog is from Albert Einstein.

“Keep it simple, as simple as possible, but not simpler.”

I completely agree with the author that this quote expresses the danger of overly simplistic design, a danger that must be considered. It is easier to make things too simplistic than to hit the sweet spot of exactly as simple as it needs to be.

This article is one that I intend to keep in mind whenever I sit down to start a new programming project, especially as class projects start to pile up at this time of the semester. I do believe that keeping KISS, YAGNI, and Occam’s Razor in mind will lead to simple software design that greatly improves any designs I come up with.

From the blog CS@Worcester – Computer Science Discovery at WSU by mesitecsblog and used with permission of the author. All other rights reserved by the author.

On the Differences of Test Doubles

In his post “Test Double Rule of Thumb“, Matt Parker describes the five types of test doubles in the context of a “launchMissile” program. The function takes a Missile object and a LaunchCode object as parameters, and if the LaunchCode is valid, it fires the missile. Since there are dependencies on outside objects, test doubles should be used.

First, we create a dummy missile. This way we can call the method using our dummy and test the behavior of the LaunchCodes. The example Parker gives is testing that, given expired launch codes, the missile is not launched. If we pass our dummy missile and invalid LaunchCodes to the function, we can tell whether the dummy was called or not.

The next type of test double Parker explains is the spy. Using the same scenario, he creates a MissileSpy class that records whether launch() was called; if it was, a flag is set to true, and we can check that flag to determine whether the spy was called.

The mock is similar to the spy. Essentially, a mock object is a spy that has a function to verify itself. In our example, say we wanted to verify a “code red”, where the missile is disabled given an invalid LaunchCode. By adding a “verifyCodeRed” method to our spy object, we can get the verification from the object itself rather than writing more complicated tests.

Stubs are test doubles that return hard-coded values in place of a real collaborator when a particular function is called. Parker explains that all of these example tests have been using stubs for the launch codes, since they simply return a boolean where, in practice, real launch codes would be provided. We do not need to know anything about the LaunchCodes to test our launchMissile function, so stubs work well for this application.
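
To tie the spy and the stub together, here is a minimal Java sketch; the class and method names are hypothetical and only loosely follow Parker’s example:

```java
// Hypothetical interfaces following the post's launchMissile example
interface Missile {
    void launch();
}

interface LaunchCode {
    boolean isValid();
}

// Spy: a fake missile that only records whether launch() was called
class MissileSpy implements Missile {
    private boolean launchWasCalled = false;

    @Override
    public void launch() {
        launchWasCalled = true;
    }

    boolean launchWasCalled() {
        return launchWasCalled;
    }
}

// Stub: a launch code with a hard-coded answer
class ExpiredLaunchCodeStub implements LaunchCode {
    @Override
    public boolean isValid() {
        return false;   // always expired
    }
}

public class TestDoubleExample {
    // The function under test: only launches when the code is valid
    static void launchMissile(Missile missile, LaunchCode code) {
        if (code.isValid()) {
            missile.launch();
        }
    }

    public static void main(String[] args) {
        MissileSpy spy = new MissileSpy();
        launchMissile(spy, new ExpiredLaunchCodeStub());
        // Expected: false — given an expired code, the missile must not be launched
        System.out.println("Missile launched? " + spy.launchWasCalled());
    }
}
```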

The final type of double Parker describes is the fake. A fake is used when there is a combination of read and write operations to be tested. Say we want to make sure multiple calls to launch() are not satisfied. To do this, he explains, a database should be used, but before designing an entire database he creates a fake one to verify that the logic will behave as expected.

I found this post helpful in illuminating the differences between the types of doubles, as Parker’s use of simple examples made it easy to follow and get additional practice identifying these testing strategies.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

The Four Levels of Testing in Software

For this week, I have decided to read “Differences between the different levels of testing” from the ReqTest blog. The reason I have chosen this blog is that it is crucial to understand the basis for each testing level. Even though it is short and simple, it helps in understanding the development process in terms of test levels.

This blog post basically goes over the four recognized levels of testing: unit or component testing, integration testing, system testing, and acceptance testing. Unit or component testing is the most basic type of testing and is performed at the earliest stages of the development process. It aims to verify each part of the software by isolating it and then performing tests on each component. Integration testing aims to test different parts of the system together to assess whether they work correctly with each other; it can be adopted as a bottom-up or top-down method depending on the modules. System testing tests all components of the software together to ensure the overall product meets the specified requirements. It is a very important step in the process, as the software is almost done and needs confirmation. Acceptance testing is the level that determines whether a product is ready or not; it aims to evaluate whether the system complies with the end-user requirements and is ready to be deployed. These four types of testing should be seen not only as a hierarchy but also as a sequence in the development process. Across all these testing levels, testing early and testing frequently is well worth the effort.
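
As a small illustration of the first level, here is a minimal sketch of a unit test in Java with JUnit 5 (the Calculator class is hypothetical): the component is exercised completely in isolation from the rest of the system.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical component under test
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Unit test: verifies one component on its own, with no other parts of the system involved
class CalculatorTest {
    @Test
    void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}
```

Integration, system, and acceptance testing would then progressively combine this component with others, the whole product, and finally the end users’ expectations.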

What I find interesting about this blog is the simplicity with which it explains each testing level. Each testing level gets a small definition of what it is supposed to do, when it is used, and an example of where it fits in the process. This content has changed the way I think about the process by giving the explanations in a format that is not too complicated to follow.

Based on the contents of this blog, I would say that it is short and very easy to understand. I do not disagree with the content, because the ideas given for the testing levels do connect to the development process. For future practice, I shall try to bring a mind of constant alertness to my projects using these four levels. That way, I can be better at detecting software errors.

Link to the blog: https://reqtest.com/testing-blog/differences-between-the-different-levels-of-tests/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.