
CS@Worcester – Fun in Function 2018-01-18 12:13:19

This is the introductory post for the CS-448 course of Spring 2018.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-12-11 23:50:09

The blog post this is written about can be found here.

This week I decided to look into software frameworks, and I picked this blog post because of its concise explanation and because it included the advantages and disadvantages of using them.

Software frameworks provide developers with ways to create applications without starting from scratch. Instead of writing every piece of functionality, you only have to write the pieces that are unique to your application. Like the frame of a building under construction, a framework is the bare-bones essentials for the type of project you want to create. Using a framework allows software to be developed more quickly and with higher quality, since the framework code is pre-tested. With less code to write and test themselves, developers can focus on fulfilling their specific requirements instead of reinventing the wheel.

Software frameworks adhere to the inversion of control design principle, in which the general framework instantiates and invokes the objects and methods specific to your application. This contrasts with using a software library, in which a custom application instantiates and invokes the objects and methods that belong to the library.
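
To make the contrast concrete, here's a minimal sketch with invented names (not taken from any real framework):

// Library style: the application drives the flow and calls into the library.
class HtmlLibrary {
    String wrap(String text) { return "<p>" + text + "</p>"; }
}

// Framework style (inversion of control): the application supplies a hook,
// and the framework decides when to instantiate and invoke it.
interface Task { void run(); }

class MiniFramework {
    void start(Task task) {
        System.out.println("framework setup"); // framework-owned control flow
        task.run();                            // the framework calls your code
    }
}

class App {
    public static void main(String[] args) {
        System.out.println(new HtmlLibrary().wrap("hello")); // you call the library
        new MiniFramework().start(() -> System.out.println("my code, invoked by the framework"));
    }
}

Either way the same work gets done; the difference is who owns the control flow.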

Some other advantages to using software frameworks, as the blogger writes, are that they can encourage better programming practices and appropriate use of design patterns. Upgrades to the framework can also provide benefits to the framework users without them having to do additional coding of their own. Lastly, software frameworks are by definition extensible.

One downside is that if you create your own software framework, building the first application that uses it is more difficult and time-consuming than it would have been without the framework. If the framework is well made, however, the development and testing effort is reduced in every future project that uses it.

Another disadvantage is that frameworks can be difficult to learn. This can negate the framework's advantages for the first project a developer uses it on, much as they're negated for the first application built on a brand-new framework. Finally, frameworks can grow increasingly complex over time with updates and additions.

Though they certainly have their disadvantages, software frameworks seem like an intuitive solution to me. If I’m tasked with doing something a hundred times with slight variations, after a while, it only makes sense to find the commonality between all the instances and use that as a base to add the variations to. Additionally, in the future, I will keep an eye out for whether software frameworks are available for the type of project I’m trying to create and use them if the benefits outweigh the effort needed to learn them.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-12-11 23:14:26

The article this blog post is written about can be found here.

This article gives an overview of web application testing, which involves several concerns not found in other kinds of software testing. I chose the article from a few sources on web testing because it was easily understandable while providing enough detail to get a feel for what's involved.

One of the forms of testing highlighted in this article is usability testing. This includes verifying that there's a consistent look and feel throughout the site, that the application is easy to navigate, and that it's clear to users what options are available to them. The article also makes note of the 1998 amendment to Section 508 of the Rehabilitation Act, which outlines accessibility requirements for people with disabilities on information technology systems belonging to the US federal government. Section 508 compliance isn't required for non-federal websites, but including accessibility features opens your web application up to a wider audience. The article gives the example of what your application should do when a user fails to fill in a required field: simply changing the field title's color to something noticeable like red, as is commonplace, isn't useful to someone who has trouble distinguishing colors. Another visual cue, like an asterisk, works in this situation.
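
As a tiny illustration of that last point, a hypothetical labeling helper could pair color styling with a cue that doesn't depend on color perception (the method name and design are invented, not from the article):

// Hypothetical helper: mark a missing required field with an asterisk and
// text, so the cue survives even if the user can't distinguish red styling.
static String requiredFieldLabel(String fieldName, boolean missing) {
    return missing ? "* " + fieldName + " (required)" : fieldName;
}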

HTML verification is another form of testing for web applications. Testing for correct syntax is the obvious form, but it also includes testing the way your application displays across different internet browsers, OSes, screen resolutions, and device types. Your application may be usable and look great in one context, but break in another.
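
Checks like this are commonly automated. As a minimal sketch, here's how the same smoke test might be run in two browsers using Selenium WebDriver, one common tool for the job (the article doesn't prescribe any particular tool, and example.com stands in for your application's URL):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CrossBrowserCheck {
    public static void main(String[] args) {
        // Run the same check in two browsers; a page that renders fine in
        // one context may break in another.
        for (WebDriver driver : new WebDriver[] { new ChromeDriver(), new FirefoxDriver() }) {
            try {
                driver.get("https://example.com");
                System.out.println("loaded: " + driver.getTitle());
            } finally {
                driver.quit(); // always close the browser session
            }
        }
    }
}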

Load testing must also be done on applications that are intended to be accessed through the internet. They have to be able to function during times of high traffic, and testing of this sort can be used to find bottlenecks. In addition, performance tuning is prudent. All pages of your application should load quickly – the article suggests within 15 seconds.
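
To get a feel for what load generation involves, here's a toy sketch using Java's built-in HttpClient (a real load test would use a dedicated tool; example.com again stands in for your application):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TinyLoadTest {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 concurrent "users"
        for (int i = 0; i < 500; i++) {
            pool.submit(() -> {
                long start = System.nanoTime();
                try {
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
                } catch (Exception e) {
                    System.out.println("failed: " + e.getMessage()); // a bottleneck symptom
                }
            });
        }
        pool.shutdown();
    }
}

Watching how response times grow as the thread count rises is a crude way to find where the bottlenecks are.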

User acceptance testing is used to determine whether your application does what it set out to do and makes something easier for the user instead of harder. One way this can be done is with a beta release.

Finally, of extreme importance in web applications is security testing, which should be done by qualified security specialists. The damage that can be done if this is neglected is immense.

This article gave me a good introduction to the additional testing required for web applications. Of all the details, I found adherence to Section 508 especially interesting. It might not be a legal requirement for anything I design in the future, but if I ever do build a web application destined for the real internet, I will definitely want to make it accessible.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-12-04 23:52:05

The article this blog post is written about can be found here.

I chose this article for this week because I was curious about integration testing, as most of what we’ve done up until now has been unit testing.

Integration testing, broadly speaking, is testing used to determine whether the connections between two systems work. Systems can be a lot of different things – different parts of the code in one software product, multiple software products working together, your code and a database, a database and the internet, etc. Sometimes individual pieces work fine on their own, and yet the whole breaks down once they are combined.
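
In code, the distinction might look like this hypothetical JUnit 4 example: two components that each pass their own unit tests, plus an integration test exercising the connection between them, which is where the fault would surface:

import static org.junit.Assert.assertEquals;

import java.util.Locale;
import org.junit.Test;

public class PriceRoundTripTest {
    // Two components that each work fine in isolation.
    static class Formatter {
        String format(double price) { return String.format(Locale.US, "$%.2f", price); }
    }

    static class Parser {
        double parse(String s) { return Double.parseDouble(s); }
    }

    @Test
    public void unitsPassAlone() {
        assertEquals("$19.99", new Formatter().format(19.99));
        assertEquals(19.99, new Parser().parse("19.99"), 0.001);
    }

    @Test
    public void integrationExercisesTheSeam() {
        // Combined, there's a connection the unit tests never checked: Parser
        // can't read Formatter's "$" prefix, so the hand-off needs an adapter
        // step. Drop the substring(1) and this test fails, exposing the fault.
        String wire = new Formatter().format(19.99);
        double roundTrip = new Parser().parse(wire.substring(1));
        assertEquals(19.99, roundTrip, 0.001);
    }
}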

As an example of where integration testing comes in, the article offers a scenario in which someone takes a picture, uploads it to Twitter with a caption, and sends the link to a friend. If one of those steps fails and the tweet never shows up in TweetDeck, testing would have to determine where in the chain of connections the fault lies, and then what specifically went wrong within that connection. The article suggests starting this process by reviewing the log files, which should indicate how far the tweet managed to get.

The article gives electronic health record systems as another example of complex systems where integration testing is needed. There are about 20 different popular EHR systems in use, as well as ones created by healthcare companies, which all store data in one way and send it out in a different way. Records go out to insurance companies that want to receive them in different formats. There isn’t one record to represent all the medical information about one person, but scattered records containing different information based on what each party needs. This situation demands thorough integration testing of the connections between the EHR systems, billing companies, and insurance companies. With so much that varies, there’s a lot of opportunity for failure.

Reading this article helped me understand the different scales on which integration testing operates, and that I can’t think of a piece of software as existing by itself – it’s going to interact with others’ code and with elements in the outside world. It’s necessary to consider not just the software itself but the bigger picture. With this in mind, I will think about the ways in which my code interacts with other components and have an idea of where to start testing those interactions.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-12-03 21:24:34

The blog post this is written about can be found here.

I researched architectural decision records this week. I picked this blog post because it's both informative and self-demonstrating; the post is itself structured like an architectural decision record.

It starts by explaining the context, meaning the problem that created the need for ADRs. Developers who are added to an ongoing project come in without knowing why or how decisions about the structure of the project were made. They are left with two options: blindly accept those decisions, or blindly change them. Neither is desirable. Additional context includes the fact that people are less likely to read or update long documents, so the solution to this problem should be a brief document that keeps track of the architectural decisions made for the project, why they were made, and their consequences.

ADRs record the kinds of decisions that affect the structure of the software, its non-functional characteristics, its interfaces, dependencies, and construction techniques. A record describes the set of forces that went into the decision-making, some of which are likely to be opposed to each other. It describes the decision reached in response to those forces, as well as the status of that decision: proposed, accepted, or superseded. It's good to keep records of decisions even after they've been replaced, so anyone looking back can see the whole picture of how the project progressed. Finally, the record explains the consequences of the decision, which can be positive, negative, or neutral. The consequences of one decision often become the context of future decisions.
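
Put together, a record following that structure might read like this (the content is made up for illustration):

Title: 3. Use PostgreSQL for persistent storage
Status: Accepted (supersedes ADR 1, "Store records as flat files")
Context: The flat-file approach can't handle concurrent writes, and two
upcoming features need transactional updates.
Decision: We will store all persistent data in PostgreSQL.
Consequences: Concurrent writes become safe (positive); developers must now
run a local database to test (negative); this choice becomes context for
future decisions about backups and deployment.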

Agile software development is often thought of as opposed to documentation, but in actuality, it’s only opposed to useless documentation. ADRs can be extremely valuable for a software development team, particularly developers rotating through projects. The blog writer mentions that his team has been using architectural decision records for roughly three months, and in that time, every one of the six to ten developers being rotated through projects said that they appreciated the context they got by reading the ADRs. Additionally, they can be useful for the project’s stakeholders, who will want something brief to read to understand how the project is progressing.

Having read this blog post, I can imagine how ADRs might be used in some of the example projects we’ve seen in class. While refactoring the duck class, we might have created documents explaining our decisions to use the strategy pattern, the singleton pattern, and the factory pattern. Someone who came into the project after our final version might be confused by the seeming complexity, and records of those decisions would serve to clarify the significance of our decisions to them.

Going forward, I will keep in mind the value of ADRs and use them appropriately.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-27 23:31:16

This week’s resource can be found here.

I chose this resource on code review because we’ll be doing a code review ourselves this week, and it not only outlines several types of code review but also includes interesting additional information like statistics about which code review practices produce the best results.

There are a few unexpected benefits of reviewing code. Situations that foster communication between programmers about the code they've written help distribute the sense of ownership over any particular piece of code, which is useful because blaming each other for faults doesn't help anybody. Code review can also serve as a good educational tool for newer developers, since more experienced developers can point out ways to write cleaner code and shortcuts for solving common problems. A more obvious benefit is that human inspection is the best way to find complicated problems in a program's requirements, design, scalability, and error handling, as well as non-code aspects like legibility. Lastly, knowing your code is going to be reviewed demonstrably improves its quality.

There are several methods of lightweight code review that fit in with modern development. Code review can be done in an email thread, for example. The benefit of this is flexibility, since it doesn't require everyone to meet at the same time. The downside is that the original coder ends up with a bunch of different suggestions to sort through, instead of the group reaching a consensus on what should be done.

Another lightweight method is pair programming, where two programmers work on the same piece of code and check each other’s work as they go. The benefit to this is that code review happens automatically during the development process. The downside is that the coders can’t get much distance from their project, so they lose the advantage of a fresh set of eyes looking at it.

Third is the over-the-shoulder method, in which someone reads your code while you explain to them why you wrote it that way. This is intuitive and very lightweight, but problems can arise if you don’t document what happens at this meeting.

Tool-assisted code review involves using software to help with the review process. Programmers can contribute reviews at different times and without being in the same location using this method, which offers the flexibility of reviewing by email thread with more organization. You still miss out on the benefits of meeting and discussing in person, however.

In addition to making me aware of several different code review options, this resource provided statistics on how best to use these methods. Code should be reviewed in chunks of under 400 lines, as the ability to uncover defects diminishes beyond that. Reviewing at a rate of 300 lines of code per hour or slower, with a total review time of under an hour, results in the best defect detection; detection drops significantly after 90 minutes of review time. These are very useful things to keep in mind.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-20 21:07:22

The article this blog post is written about can be found here.

I decided this week to research the Law of Demeter, also known as the principle of least knowledge. I chose this article specifically to write about because it provides a source code example in Java, it gives examples of the sort of code that the Law of Demeter is trying to prevent, and it explains why writing code like that would be a bad idea.

The Law of Demeter, as applied to object-oriented programming, is a design principle consisting of a set of rules about which methods an object may call. The law states that an object should only call its own methods, the methods of objects passed to it as parameters, the methods of objects it creates locally, the methods of its instance variables, and the methods of globally accessible objects. The general idea is that an object should know as little as possible about the structure or properties of anything besides itself. More abstractly, objects should only talk to their immediate friends, not to strangers. Adhering to the Law of Demeter creates classes that are loosely coupled, and it also follows the principle of information hiding.
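
In code, those five rules look something like this (a minimal sketch with invented class names):

class Engine { void start() { } }
class Key { void turn() { } }
class Radio { void on() { } }
class Door { void open() { } }
class Garage { static final Door DOOR = new Door(); }

class Car {
    private Engine engine = new Engine(); // instance variable

    void drive(Key key) {
        honk();              // allowed: the object's own methods
        key.turn();          // allowed: methods of a parameter
        new Radio().on();    // allowed: methods of a locally created object
        engine.start();      // allowed: methods of an instance variable
        Garage.DOOR.open();  // allowed: methods of a globally accessible object
        // not allowed: reaching through a friend to a stranger,
        // e.g. key.getOwner().getPhone().ring()
    }

    void honk() { }
}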

The sort of code the Law of Demeter exists to prevent is a chain of method calls like this:

objectA.getObjectB().getObjectC().doSomething();

The article explains why this is a bad idea. ObjectA might lose its reference to ObjectB during refactoring, just as ObjectB might lose its reference to ObjectC. The methods being called on ObjectB or ObjectC, like doSomething(), might change or be removed. And since classes written this way are tightly coupled, it's much harder to reuse any one of them individually. Following the law means your classes are less affected by changes in other classes, are easier to test, and tend to have fewer errors.

If you want to improve the bad code so it adheres to the Law of Demeter, you can pass ObjectC directly to the class containing the original code, giving it access to the doSomething() method without the chain. Alternatively, you can create wrapper methods in your other classes that pass requests on to a delegate. Lots of delegate methods will make your code larger and slower, but also easier to maintain.
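
Here's a minimal sketch of both fixes, reusing the ObjectA/ObjectB/ObjectC names from the chain above:

class ObjectC {
    void doSomething() { /* the real work */ }
}

class ObjectB {
    private ObjectC c = new ObjectC();
    void doSomething() { c.doSomething(); } // wrapper that delegates one level down
}

class ObjectA {
    private ObjectB b = new ObjectB();
    void doSomething() { b.doSomething(); } // callers never see the chain
}

class Client {
    // Fix 1: receive the collaborator you actually need, instead of reaching
    // for it through a chain of getters.
    void useDirectly(ObjectC c) { c.doSomething(); }

    // Fix 2: talk only to your immediate friend; the delegates do the rest.
    void useViaDelegates(ObjectA a) { a.doSomething(); }
}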

Before reading this article, seeing a chain of calls like that probably would have made me instinctively recoil, but I wouldn’t have been able to explain exactly what was wrong. The bad examples given in this article clarified the reasons why code like that is weak. The article also gave me concrete ways I can follow the Law of Demeter in future code.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.