
CS@Worcester – Fun in Function 2017-12-04 23:52:05

The article this blog post is written about can be found here.

I chose this article for this week because I was curious about integration testing, as most of what we’ve done up until now has been unit testing.

Integration testing, broadly speaking, is testing that determines whether the connection between two systems works. A system can be many different things – different parts of the code in one software product, multiple software products working together, your code and a database, a database and the internet, etc. Sometimes individual pieces work fine on their own, and yet the whole breaks down once they are combined.
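
To make the idea concrete at the smallest scale, here’s a little sketch of my own (the class names are made up, not from the article) contrasting a unit test with an integration test in JUnit:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class IntegrationExampleTest {

    // Two small components that each work on their own.
    static class Formatter {
        String format(String user, String text) { return user + ": " + text; }
    }

    static class MessageBoard {
        private final Formatter formatter;
        private final java.util.List<String> posts = new java.util.ArrayList<>();
        MessageBoard(Formatter formatter) { this.formatter = formatter; }
        void post(String user, String text) { posts.add(formatter.format(user, text)); }
        String latest() { return posts.get(posts.size() - 1); }
    }

    // Unit test: exercises the formatter alone.
    @Test
    public void formatterWorksAlone() {
        assertEquals("ann: hi", new Formatter().format("ann", "hi"));
    }

    // Integration test: exercises the board and the formatter combined.
    // A failure here but not above points at the connection itself.
    @Test
    public void boardAndFormatterWorkTogether() {
        MessageBoard board = new MessageBoard(new Formatter());
        board.post("ann", "hi");
        assertEquals("ann: hi", board.latest());
    }
}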

The article offers a scenario where someone takes a picture, uploads it to twitter with a caption, and sends the link to a friend as an example of where faults discovered by integration testing might be found. If one of those steps fails and the tweet never shows up in TweetDeck, testing would have to be done to determine where in the chain of connections the fault lies, and then what specifically went wrong within that connection. The article suggests starting this process by reviewing the log files, which should offer an indication of how far the tweet managed to get.

The article gives electronic health record systems as another example of complex systems where integration testing is needed. There are about twenty popular EHR systems in use, as well as ones created by healthcare companies themselves, and each stores data in one format and sends it out in another. Records go out to insurance companies that want to receive them in different formats. There isn’t one record representing all the medical information about a person, but scattered records containing different information based on what each party needs. This situation demands thorough integration testing of the connections between the EHR systems, billing companies, and insurance companies. With so much that varies, there’s a lot of opportunity for failure.

Reading this article helped me understand the different scales on which integration testing operates, and that I can’t think of a piece of software as existing by itself – it’s going to interact with others’ code and with elements in the outside world. It’s necessary to consider not just the software itself but the bigger picture. With this in mind, I will think about the ways in which my code interacts with other components and have an idea of where to start testing those interactions.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-12-03 21:24:34

The blog post this is written about can be found here.

I researched architectural decision records this week. I picked this blog post because it’s both informative and self-demonstrating; the post is itself structured like an architectural decision record. It starts by explaining the context – the problem that produced the need for the solution ADRs provide. Developers who join an ongoing project come in without knowing why or how decisions about the structure of the project were made. They are left with two options: blindly accept the decisions that have been made, or blindly change them. Neither of these is desirable. Additional context includes the fact that people are less likely to read or update long documents, so the solution should be a brief document that keeps track of the architectural decisions made for the project, why they were made, and their consequences.

The types of decisions recorded in ADRs are decisions that affect the structure of the software, its non-functional characteristics, its interfaces, dependencies, and construction techniques. They describe the set of forces that went into the decision-making, some of which are likely to be opposed to each other. They describe the decision reached in response to those forces, as well as the status of that decision. Decisions might be proposed, accepted, or superseded. It’s good to keep records of decisions even after they’ve been replaced, so anyone looking back can see the whole picture of how the project progressed. Finally, the record explains the consequences of this decision, which can be positive, negative, or neutral. The consequences of one decision often become the context of future decisions.
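
As a quick illustration, a minimal ADR skeleton reflecting the sections described above might look something like this (my own sketch – the post’s exact template may differ):

Title: 1. Use architectural decision records

Status: Accepted (may later be superseded by a newer decision)

Context: The forces at play, including those that oppose each other.

Decision: The change being made in response to those forces.

Consequences: What becomes easier or harder as a result; these often become the context of future decisions.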

Agile software development is often thought of as opposed to documentation, but in actuality, it’s only opposed to useless documentation. ADRs can be extremely valuable for a software development team, particularly developers rotating through projects. The blog writer mentions that his team has been using architectural decision records for roughly three months, and in that time, every one of the six to ten developers being rotated through projects said that they appreciated the context they got by reading the ADRs. Additionally, they can be useful for the project’s stakeholders, who will want something brief to read to understand how the project is progressing.

Having read this blog post, I can imagine how ADRs might be used in some of the example projects we’ve seen in class. While refactoring the duck class, we might have created documents explaining our decisions to use the strategy pattern, the singleton pattern, and the factory pattern. Someone who came into the project after our final version might be confused by the seeming complexity, and records of those decisions would clarify to them why we made the choices we did.

Going forward, I will keep in mind the value of ADRs and use them appropriately.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-27 23:31:16

This week’s resource can be found here.

I chose this resource on code review because we’ll be doing a code review ourselves this week, and it not only outlines several types of code review but also includes interesting additional information like statistics about which code review practices produce the best results.

There are a few unexpected benefits of reviewing code. Situations that foster communication between programmers about the code they’ve written help distribute the sense of ownership over any particular piece of code, which is useful because blaming each other for faults in the code doesn’t help anybody. Code review can also serve as a good educational tool for newer developers, as more experienced developers can point out ways to write cleaner code and shortcuts to solve common problems. A more obvious benefit is that human inspection is the best way to find complicated problems in the software’s requirements, design, scalability, error handling, and other non-code aspects like its legibility. Lastly, knowing your code is going to be reviewed demonstrably improves the quality of your code.

There are several methods of lightweight code review that fit in with modern development. Code review can be done in an email thread, for example. The benefit of this is its flexibility, as it doesn’t require everyone to meet at the same time. The downside is that the original coder ends up with a bunch of different suggestions they have to sort through, instead of the group reaching a consensus on what should be done.

Another lightweight method is pair programming, where two programmers work on the same piece of code and check each other’s work as they go. The benefit to this is that code review happens automatically during the development process. The downside is that the coders can’t get much distance from their project, so they lose the advantage of a fresh set of eyes looking at it.

Third is the over-the-shoulder method, in which someone reads your code while you explain to them why you wrote it that way. This is intuitive and very lightweight, but problems can arise if you don’t document what happens at this meeting.

Tool-assisted code review involves using software to help with the review process. Programmers can contribute reviews at different times and without being in the same location using this method, which offers the flexibility of reviewing by email thread with more organization. You still miss out on the benefits of meeting and discussing in person, however.

In addition to making me aware of several different code review options, this resource provided statistics on how best to use these methods. Code should be reviewed in chunks of under 400 lines, as defect discovery drops off beyond that. Reviewing at a rate of 300 lines of code per hour or less, with a total review time of under an hour, results in the best defect detection, and detection drops significantly after 90 minutes of review time. These are very useful things to keep in mind.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-20 21:07:22

The article this blog post is written about can be found here.

I decided this week to research the Law of Demeter, also known as the principle of least knowledge. I chose this article specifically to write about because it provides a source code example in Java, it gives examples of the sort of code that the Law of Demeter is trying to prevent, and it explains why writing code like that would be a bad idea.

The Law of Demeter, as applied to object-oriented programming, is a design principle consisting of a set of rules about which methods an object should be able to call. The law states that an object should only call its own methods, the methods of arguments passed to it as parameters, the methods of objects it creates locally, the methods of objects that are its instance variables, and the methods of objects that are global variables. The general idea is that an object should know as little as possible about the structure or properties of anything besides itself. More abstractly, objects should only talk to their immediate friends, not to strangers. Adhering to the Law of Demeter creates classes that are loosely coupled, and it also follows the principle of information hiding.

The sort of code that the Law of Demeter exists to prevent is a chain of method calls that looks like this:

objectA.getObjectB().getObjectC().doSomething();

The article explains that this is a bad idea for several reasons. ObjectA might lose its reference to ObjectB during refactoring, just as ObjectB might lose its reference to ObjectC. The methods in ObjectB or ObjectC that the chain relies on might change or be removed. And since classes written this way are tightly coupled, it will be much harder to reuse any one of them individually. Following the law means your classes will be less affected by changes in other classes, they’ll be easier to test, and they’ll tend to have fewer errors.

To improve the bad code so it adheres to the Law of Demeter, you can pass ObjectC directly to the class containing the original code so it can access the doSomething() method itself. Alternatively, you can create wrapper methods in your other classes which pass requests on to a delegate. Lots of delegate methods will make your code larger and slower, but it will also be easier to maintain.
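
Here’s a minimal sketch of the second approach, using the hypothetical names from the example above: ObjectA exposes a wrapper method and delegates the request, so callers never have to reach through it.

// Before: objectA.getObjectB().getObjectC().doSomething();

class ObjectA {
    private final ObjectB b;

    ObjectA(ObjectB b) { this.b = b; }

    // Wrapper method: callers talk only to their immediate friend.
    void doSomething() {
        b.doSomething();
    }
}

class ObjectB {
    private final ObjectC c;

    ObjectB(ObjectC c) { this.c = c; }

    void doSomething() {
        c.doSomething(); // ObjectB delegates to the object it holds
    }
}

class ObjectC {
    void doSomething() {
        System.out.println("doing something");
    }
}

Callers now just write objectA.doSomething(). If ObjectB’s reference to ObjectC changes during refactoring, only ObjectB has to be updated.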

Before reading this article, seeing a chain of calls like that probably would have made me instinctively recoil, but I wouldn’t have been able to explain exactly what was wrong. The bad examples given in this article clarified the reasons why code like that is weak. The article also gave me concrete ways I can follow the Law of Demeter in future code.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-20 19:03:56

The blog post this is written about can be found here.

I picked this blog post because we’ve been utilizing mock objects in class lately, and this post explains in depth the logic behind using them, in addition to succinctly summarizing the different types of mock objects.

Using mock objects focuses a test on the specific code we want to test, eliminating its dependencies on other pieces of code we don’t care about at the moment. This way, if a test fails, we can be sure it’s because of a problem in the code under test and not in something called by it. This greatly simplifies searching for faults and reduces time spent looking for them.

Mock objects also serve to keep the test results consistent, especially when the real object you’re creating a mock of can undergo unpredictable changes. If you utilize a changing database, for instance, your test might pass one time and then fail the next, which gives you no useful information.

Mock objects can also reduce the time necessary to run tests. If code would normally call outside resources, running hundreds of tests which utilize the actual code could take a long while. Mocks of these resources would respond much more quickly. Obviously we want to test calls to the actual resources at some point, but they aren’t necessary in every instance.

“Mock” is also used as a generic term for any kind of imitation object used to replace a real object during testing, and there are several kinds. Fakes return a predictable result, but the result isn’t based on the logic the real object would use to produce it. Stubs return a specific result in response to specific input, but they aren’t equipped to handle other inputs. Stubs can also retain information about how they were called, such as how many times and with what data. Mocks are more sophisticated versions of stubs: they return values in similar ways, but they can also hold expectations about how many times each method should be called, in which order, and with what data. Mocks can ensure that the code we’re testing uses its dependencies in exactly the way we want it to. Spies replace the methods of the real object a test wants to call, instead of acting as a stand-in for the whole object. Dummies are objects that are passed in place of another object but never used.
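
To make the distinction concrete, here’s a small hand-rolled sketch of my own (hypothetical names, no mocking framework):

// The dependency the code under test would normally call.
interface WeatherService {
    int temperatureIn(String city);
}

// A stub: returns a canned result and records how it was called,
// keeping the test fast and predictable.
class StubWeatherService implements WeatherService {
    int callCount = 0;
    String lastCity = null;

    @Override
    public int temperatureIn(String city) {
        callCount++;
        lastCity = city;
        return 20; // fixed answer, no real network access
    }
}

// The code under test, isolated from the real service.
class Forecaster {
    private final WeatherService service;

    Forecaster(WeatherService service) { this.service = service; }

    String advice(String city) {
        return service.temperatureIn(city) > 15 ? "shorts" : "coat";
    }
}

// In a test:
// StubWeatherService stub = new StubWeatherService();
// assert new Forecaster(stub).advice("Worcester").equals("shorts");
// assert stub.callCount == 1 && "Worcester".equals(stub.lastCity);

A full mock would go further and fail the test itself if temperatureIn() were called the wrong number of times or in the wrong order; frameworks like Mockito generate that machinery for you.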

Creating the most sophisticated type of mock seems like it might take more time than it’s worth, but existing mocking frameworks can take care of most of the work of creating mock objects for your tests.

In the future I expect to write tests that utilize mocking. This post, along with Martin Fowler’s article, has given me a good starting point in being able to utilize them effectively as well as decide how elaborate a mock needs to be for a particular test.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-20 17:12:09

The blog post this information is sourced from can be found here.

I chose this blog post because it briefly sums up the general responsibility assignment software patterns (GRASP), which I wanted to learn about. GRASP is a set of patterns that serve as guidelines for deciding which classes or objects should be assigned responsibilities. There are nine patterns in total.

A general method for deciding which class to assign a responsibility to is to assign it to the information expert: the class that has the necessary information to carry out the responsibility in full.

When trying to decide which object should be responsible for creating new instances of a class, the Creator pattern says that the responsibility for creating Class A instances should be assigned to Class B if Class B contains or aggregates instances of Class A, keeps a record of Class A objects, is closely associated with Class A objects, or has all the information necessary to create a Class A object – that is, if Class B is an information expert about creating Class A objects.
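
For instance, here’s a minimal sketch with hypothetical classes (my example, not the post’s): an Order contains LineItem objects and has the information needed to build them, so Creator assigns it the creation responsibility.

import java.util.ArrayList;
import java.util.List;

class LineItem {
    final String product;
    final int quantity;

    LineItem(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}

class Order {
    private final List<LineItem> items = new ArrayList<>();

    // Order aggregates LineItems and knows what goes into one,
    // so it gets the job of creating them.
    void addItem(String product, int quantity) {
        items.add(new LineItem(product, quantity));
    }

    int itemCount() {
        return items.size();
    }
}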

The controller pattern answers the question of what should handle input system events. Controllers can represent the entire system, device, or subsystem, in which case they’re referred to as façade controllers. They can also be use case or session controllers, which handle all system events of a use case. The controller doesn’t itself do the work, but instead delegates it to the appropriate objects and controls the flow of activity.

The low coupling pattern holds that responsibilities should be assigned in such a way that coupling remains as low as possible – that is, there is little dependency between classes, high reuse potential, and changes in one class have a low impact on other classes.

High cohesion means the responsibilities in a class are highly related or focused on a single goal. The amount of work one class does should be limited, and classes shouldn’t do lots of unrelated things.

If behavior varies based on type, polymorphism holds that the responsibility of defining that variation should be assigned to the types for which the variation occurs.
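
For example (a sketch of my own), if the way area is computed varies by shape, each shape type defines its own variation:

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) { this.radius = radius; }

    @Override
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;

    Square(double side) { this.side = side; }

    @Override
    public double area() { return side * side; }
}

// Callers ask any Shape for its area; no type-checking switch needed.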

Pure fabrication classes exist to maintain high cohesion in cases where assigning responsibilities according to the information expert pattern would compromise it. They don’t represent anything in the problem domain.

Indirection maintains low coupling by assigning mediation responsibilities between two classes to an intermediate class.

The protected variations pattern assigns responsibilities that create a stable interface around points of likely variation or instability to protect other parts of the system from being affected by any variation/instability.

This post gave me ideas I can refer back to when solving the problem of which of several classes should be assigned a particular responsibility. The problems addressed by these design patterns exist in just about every software development project, so there’s no doubt I will find it useful in the future.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-13 23:46:47

Knowing what you know,

knowing what you don’t know,

not having a means to discover what you don’t know,

not knowing what means exist to learn what you don’t know,

not knowing how knowledge acquisition works –

can we answer these questions concretely, or only philosophically?

What is quality?

One answer: how useful or helpful the customer or user of a thing finds it.

It feels good to use.

It’s easy to operate.

You know what it can and can’t do, and you’re happy with that.

You get joy out of using it.

It’s customized and personal, based on the individual.

Quality does not equal bug-free.

If one piece of software has a lot of known bugs but is heavily used by thousands or hundreds of thousands of people, and another is bug-free but used by only ten people, which is higher quality?

Quality is about the context and the product’s ability to solve a problem in that context:

does it solve the problem in the context in which we’re expecting to solve it, without unexpected consequences?

A chair without a seat is low quality because you expect to be able to sit on a chair; it can’t serve its basic purpose.

A different chair has a seat, but its front right leg is three inches too short. Leaning back is fine, but leaning forward you might fall on the floor. A higher-quality chair wouldn’t “crash” like that.

How delightful is it to sit in the chair?

Quality is not just requirements or fitness for use –

it’s value to some person.

What is a test case?

A specific action and outcome:

a thing you want to try and a result you want to verify.

For example: post a message to Slack, then verify it shows up.

One person argued it’s still one test case if you perform the action and verify the result across different operating systems.

More generally, a test case is any time inputs are applied to a program, outputs are generated, and a judgment call is made.

The judgment call might differ based on context.

Judgment calls are the hardest part of testing, and they’re based on your own definition of quality.

What is testing?

One view: investigating, evaluating, learning, sometimes judging.

Versus

another view: only judging – that’s what we hire testers to do.

The first view counters: testing isn’t only judging, it’s also generating new ideas for inputs and outputs.

New ideas for tests can follow from the results and judgment calls of previous tests.

Breaking things, discovering where things fail.

Identifying the judgment calls that would put the product at risk.

What’s a tester?

Someone keenly focused on risk-to-return,

lowering risk on high-value quality propositions.

What is integration testing?

Making sure all the pieces fit together –

the tests you write to make sure other people don’t break you.

Input, output, judgment call: does your component still return the expected results that my component is counting on?

What is performance testing?

Inputs and outputs that let you know the product is performing as expected:

how long actions take in the application over a variety of scenarios,

the key actions the product takes and how long each takes in different contexts.

The output is a time; the judgment is whether that time is acceptable.

Is faster always better? No – it only has to be fast enough for the customer.

It’s easy to increase performance in the wrong areas. Make the parts that matter fast.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-13 23:23:40

The resource I discovered this week is partially a blog post, but mostly the mobile app that the blog post is about. The app is called Enki. It’s similar to the language-learning app Duolingo, but its purpose is to help people learn new software development skills in small chunks every day. The blog post explains why the app was developed: the options that currently exist for ongoing learning about software development take a lot of time, something developers tend to be short on. They can also be boring and inefficient. The app creators wanted something fun, engaging, useful, and quick. The mobile platform was selected so that users would always be able to have it with them, and therefore be able to squeeze learning into limited free time like a work commute or the time it takes for their code to compile. The daily lessons, called workouts, stick to small tips and bits of applicable information instead of getting bogged down in details that people who’ve passed the beginner stage probably already know. They’re also designed specifically with avoiding boredom in mind, so the workouts contain engaging challenges and game-like elements.

I chose this resource because it’s one of the only options for continuing developer education that isn’t a book or a video course – it’s designed for practicing a little bit each day.

This resource affected me immediately, because I downloaded the app and started using it. So far I’ve only brushed up on my HTML skills, but it seems interesting, and I intend to keep using it. It seems like a fun way to learn new things or to get a refresher.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-10-23 23:56:06

The blog post this is written about can be found here.

I’ve been hearing words like waterfall and agile a lot in the course of researching software development and testing for my classes, so this week I tracked down a simple blog post explaining the difference between the two development methods. The descriptions of the two lined up with the two sides pitted against each other in the time-travel argument I wrote about for the other class.

The earlier method, waterfall, is a sequential scheme in which development is split into eight stages, and one stage of development follows another with no overlap. This is a technique I’d actually heard explained, unattached to the name waterfall, prior to this year. In other resources, it seems to be mostly referred to in terms of its disadvantages, but this post lists some of the advantages of the method as well. Because there’s no room for error or modification – you can’t go back to a previous step without starting the whole process over – extensive planning and documentation are required. The waterfall methodology can therefore ensure, to some extent, a very clear picture of the final product, and the documentation serves as a resource for making improvements in the future.

However, there are significant downsides that led to the creation of the agile methodology. The dependence on initial requirements means that if the requirements are incomplete or in error, the resulting software will be too. If the problems with the requirements are discovered, the developers will have to start over. All testing is pushed to the end, which means that if bugs were created early, they could have had an impact on code written later. The whole thing is a recipe for the project taking a very long time.

In contrast, developers using the agile methodology start with a simple design and then begin working on small modules for set intervals of time called sprints. After every sprint, testing is done and priorities are reexamined. Bugs are discovered and fixed quicker in this way, and the method is highly adaptable to changing requirements. This approach tends to be much quicker and is favored in modern development. It allows for adaptation to rapid changes in industry standards, the fast release of a working piece of software, and the ability for a client to give feedback and see immediate changes. The lack of a definitive plan at the beginning can be a drawback.

Having a clear picture of both of these methodologies provides useful context that will enable me to follow more in-depth discussions of software development, and there’s a good chance it will be relevant to my future career.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-10-23 22:19:09

The article referenced in this blog post can be found here.

This past week I found an article which put forward an unconventional idea: unit testing smells. I picked this article because applying the concept of code smells to test code was intriguing to me. The idea is that certain things that can happen in the course of writing and running your test code can inform you that something is not quite right with your production code. They aren’t bugs or test failures, but, like all code smells, indicators of poor design which could lead to difficulties down the line.

Firstly, the author suggests that having a very difficult time writing tests could signify that you haven’t written testable code. He explains that most of the time, it’s an indicator of high coupling. This can be a problem with novice testers especially, as they’ll often assume the problem is with them rather than the code they’re attempting to write tests for.

If you’re able to write tests but find yourself doing elaborately difficult things to get at the code you’re trying to test, that’s another testing smell. The author writes that this is likely the result of writing an iceberg class, which was a new term for me. Essentially, too much is encapsulated in one class, which leads to the necessity of mechanisms like reflection schemes to get at internal methods you’re trying to test. Instead, these methods should probably be public methods in a separate class.
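
Here’s a minimal sketch of the iceberg problem and the fix, with hypothetical names of my own choosing:

// Iceberg class: the interesting logic is private, so a test
// would need reflection tricks to reach normalize().
class ReportGenerator {
    String generate(String raw) {
        return "Report: " + normalize(raw);
    }

    private String normalize(String raw) {
        return raw.trim().toLowerCase();
    }
}

// Refactored: the buried logic becomes a public method on its
// own class, which can be unit tested directly.
class TextNormalizer {
    public String normalize(String raw) {
        return raw.trim().toLowerCase();
    }
}

class RefactoredReportGenerator {
    private final TextNormalizer normalizer = new TextNormalizer();

    String generate(String raw) {
        return "Report: " + normalizer.normalize(raw);
    }
}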

Tests taking a long time to run is a smell. It could mean that you’re doing something other than unit testing, like accessing a database or writing a file, or you could have found an inefficient part of the production code that needs to be optimized.

A particularly insidious test smell is intermittent test failure. This test passes over and over again, but every once in a while, it will fail when given the exact same input as always. This tells you nothing definitive about what’s going on, which is a real problem when you’re performing tests specifically to get a definitive answer about whether your code is working as intended. If you generate a random number somewhere in the production code, it could be that the test is failing for some specific number. It could be a problem with the test you wrote. It could be that you don’t actually understand the behavior of the production code. This kind of smell is a hassle to address, but it’s absolutely crucial to figure out.
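
One common remedy for randomness-driven flakiness – my own sketch, not something the article prescribes – is to inject the source of randomness so a test can pin it down:

import java.util.Random;

class DiscountPicker {
    private final Random random;

    // Production code passes new Random(); a test passes a seeded one.
    DiscountPicker(Random random) {
        this.random = random;
    }

    int pickPercent() {
        return random.nextInt(10) * 5; // 0, 5, 10, ... 45
    }
}

// In a test, new DiscountPicker(new Random(42)) always produces the
// same sequence, so any failure is reproducible rather than intermittent.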

Having read this, I won’t just pay attention to whether my tests yield the expected result, but will look out for these signs of design flaws in the code being tested.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.