Category Archives: Week 10

CS@Worcester – Fun in Function 2018-03-26 10:38:43

“Practice, Practice, Practice” is a pattern which encourages exactly what you’d expect: continuous practice. Like Breakable Toys, this pattern addresses the problem that failure in your professional responsibilities is too costly to risk. You can’t effectively learn without the freedom to fail, though, and for that reason you need somewhere safe to practice.

This is another pattern in which the writers say the ideal application would be in a world with formal software apprenticeship, but in reality, software apprentices have to create an approximation for themselves. In their ideal world, a mentor would assign their apprentice practice based on the apprentice’s strengths and weaknesses, reinforcing what the apprentice does well and correcting their weaknesses. The writers emphasize the need for objective metrics to evaluate your abilities as a substitute for this; if you practice without getting feedback, you will reinforce bad habits.

One way of making sure you receive regular feedback is to practice in a way that’s public to some degree. The example they give is a group that meets in person to perform code katas, exercises meant to help programmers sharpen their skills through repetition. For some apprentices, an activity like this probably isn’t practical or immediately accessible. Online communities devoted to practice are another option that can serve as a good source of feedback. Importantly, any method of getting feedback on your practice should take place in a relaxed and playful setting, since the point is to remove the stress of mandatory success. The practice itself should be something just beyond what you know you can do easily. In having to struggle with a problem, you strengthen your abilities in addition to gaining the benefits of repetition.

This is a pattern that appeals to me, because I’ve discovered that I learn the most by actually writing code. To that end, the writers suggest some older books that teach important programming techniques and design principles through fun problems. This seems like a useful, concrete recommendation, and I might look into obtaining one of their suggested books or find one of my own to improve my coding skills.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

The Long Road

Problem

You aspire to become a master software craftsman, yet your aspiration conflicts with what people expect from you. Conventional wisdom tells you to take the highest-paying job and the first promotion you can get your hands on, to stop programming and get onto more important work rather than slowly building up your skills.

Solution

The solution the text offers is to first accept that what you want to become might seem strange to others, and second to always think in the long term. Value learning and long-term growth opportunities over salary and traditional notions of leadership. By focusing on your long-term development, you are enriching yourself with a set of skills that aid learning, problem solving, and developing strong relationships with your customers. Keep in mind the length of your journey: if you have 20 years of work ahead of you, you have plenty of time to master your skills. The text mentions that this pattern is not for people who want to become CIOs or product managers, or to get filthy rich. Thankfully, the software development field is constantly changing, and new opportunities are always available.

I think this pattern is a good grounding for anyone who may be a little too ambitious and may take promotions without understanding how they affect the Long Road. Unfortunately, sometimes taking a promotion means a break from learning, and while you may be making better money, you may be setting yourself up for failure down the road, or at least you won’t have the same skills and knowledge you would if you had continuously worked on being a software craftsman. I do think it’s important to always consider the Long Road: where you are in your career and what kind of job you want. I think this pattern is subtly saying that taking jobs as managers or corporate executives may not be as rewarding as working on your craft for your entire career. This pattern is a good reminder to me to keep the long road in the back of my mind and focus on always putting myself in a position where I can learn new things and hopefully avoid burnout.

The post The Long Road appeared first on code friendly.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

The Dangers of Complacency

In a field that continues to rapidly evolve, staying up to date with the latest and greatest tools and techniques is essential. Complacency is simply not an option if one wishes to remain competitive and relevant in the information technology field. Not surprisingly, however, there are certain tools that we become familiar with over time, through repeated use and practice. There is nothing wrong with this, as familiarity with certain tools or techniques allows for more accurate estimations about work and helps to limit risk. In Hoover and Oshineye’s Apprenticeship Patterns, they present a pattern titled Familiar Tools that helps software apprentices deal with the complexities of complacency.

In the Familiar Tools pattern, Hoover and Oshineye start out by acknowledging how valuable it is to have a set of tools that you feel comfortable using. Not only does this make you more valuable to employers, it also makes the work easier and more valuable to the developer. From increased productivity to more accurate estimates, familiarity is important in the progression of a software craftsman.

Although the word is never explicitly mentioned, this pattern also seems to issue a warning about complacency. Hoover and Oshineye caution apprentices from becoming too set in a narrow range of familiarity, as that puts them at more risk for becoming irrelevant should the popularity or usefulness of those familiar tools fade.

This pattern was pretty easy for me to appreciate, as I already enjoy learning and improving through personal and professional development. Perhaps this desire to stay ahead of the curve is part of the reason that I became interested in the field to begin with. I have always enjoyed staying up to date with the latest and greatest gadgets, trying out beta builds, and experimenting with technology. Although the context is a bit different in the Familiar Tools pattern, the idea is very similar. The Eric Hoffer quote included in this pattern also spoke to me: “In a time of drastic change it is the learners who inherit the future. The learned usually find themselves equipped to live in a world that no longer exists.”

When I entered the computer science program three years ago, it was repeated time and time again that the material I would learn in college would likely be outdated by the time I entered the workforce. While this is simply a fact of the computer science field, I feel that I am doing well at keeping myself informed, and I appreciate the efforts of my educators in keeping my education relevant and valuable in a rapidly changing world.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Difference between Abstraction and Encapsulation

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

11/27/2017 — Blog Assignment 10 CS 343

The blog post this week discusses my favorite principle in software development, the open/closed principle (OCP). The open/closed principle encourages independence in software design. It states the following, as outlined in the article: class behavior should be open for extension but closed for modification. This can be separated into two parts. As explained in the article, the first part is about extending the behavior of a class: new behaviors can be added without affecting the class’s other behaviors. This is what it means for behaviors to be independent. The second part states that class behavior should be closed for modification. Is this a contradiction? Modification means changing source code, which the second part discourages. The resolution is that class behaviors should be independent of one another, so that existing behavior does not need to be modified when new behavior is added; when extending a class’s behavior, the source code for its other behaviors should not have to change. The other definition introduced in the article is: software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. The explanation for this principle is as I have described just above: it encourages independence between software components.

The article, however, notes that both definitions can be confusing because the two sides of the principle seem to contradict each other; still, I think it did a pretty good job with its explanations and examples. As explained in the article, the main idea behind the OCP is that the behavior of a system can be changed by replacing “entities that constitute it with alternative implementations,” while the other behaviors remain independent and therefore need not be changed. The example given is calculating taxes: take a TaxCalculator interface. In this system, if we replace the UsTaxCalculator with the UkTaxCalculator, this does not require modifying existing logic; it just provides a new implementation of an existing interface. Overall, the main idea is that you can add behaviors to a system by adding new code, while the existing code does not need to be changed. This creates separability, so that system behavior can be easily modified and extended.

Finally, I will close this post with a discussion of three rules of thumb suggested in the article that can help a developer decide when to apply the open/closed principle. The first is to add extension points to classes that implement requirements which are intrinsically unstable; this keeps the code clean and makes it easier to modify system behavior. The second is to avoid adding preliminary extension points prematurely. The third is to refactor parts of the code and introduce additional extension points when those parts turn out to be unstable.

Overall, I chose this article in order to compare my understanding of the OCP with another software developer’s. In my opinion, the OCP is based on the independence of methods, functions, and classes. Although the article indirectly refers to this point, I still think it misses introducing it directly.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

11/27/2017 — blog post 10 CS443

https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
The last two blog posts on code review discussed suggestions for improvements, its overall advantages, and a general overview of what exactly it is; this week’s blog post on code review will focus on the statistics. In general, the post lists 11 proven practices, based on experiments and surveys of experienced software developers, that have helped team members improve their code review abilities. Why is this important? Suggestions from experienced developers are based on experience, but this post crunches the numbers from case studies to demonstrate their effectiveness and to support the 11 proven practices with statistics. This is why I chose this article this week: to emphasize the importance of code reviews.
In my opinion, this article is one of the best I have seen so far at providing suggestions for code reviews while also crunching the numbers. As introduced, the suggestions in this post were compiled from code review studies and collections of lessons learned from over 6,000 programmers at over 100 companies. The main issue with code review, as suggested in the article, is efficiency; that is, reviews are often too long to be practical. The conclusions in the article come from a study of 2,500 code reviews, 50 programmers, and 3.2 million lines of code at Cisco Systems. The study tracked production teams with members in Bangalore, Budapest, and San Jose for 10 months.
The first suggestion is to review fewer than 200-400 lines of code at a time. The Cisco code review study revealed that the optimal range for finding defects is 200-400 lines of code at a time; beyond that, the ability to find defects decreases. The statistic is that if 10 defects existed, a team reviewing within this range would likely find 7 to 9 of them. What is interesting is the graph shown in figure 1, which plots defect density against the number of lines of code under review. According to the graph, as the number of lines under review increases beyond 200, the defect density drops off considerably. So, optimal effectiveness is at 200-400 lines.
The second suggestion is to aim for inspection rates of fewer than 300-500 lines of code per hour. The inspection rate measures how fast the team is able to review code. According to figure 2, the effectiveness of inspection falls off when more than 500 lines of code are reviewed per hour. Finally, the last interesting point is to never review code for more than 90 minutes. The article suggests that developers take enough time for a proper, slow review, but no more than 90 minutes. It is generally known from many case studies that code review productivity, effort, and concentration diminish after about 60 minutes; most reviewers simply get tired and stop finding additional defects, and some will probably not be able to review more than 300-600 lines of code before their performance drops. So, the rate of finding bugs deteriorates after 60 minutes, and code reviews should not last more than 90 minutes.
I chose this article because the suggestions were too good to pass up for a blog post. I find the first three suggestions helpful to keep in mind when reviewing code, as effort always diminishes with time.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Continuous Development

Continuing on with the TestOps posts by the, well, awesome Awesome Testing blog is Continuous Development. This is actually very interesting to me, as it was a large part of what was taught in my Software Process Management course last year, so it was an enjoyable surprise to see it as the next covered topic.

Generally speaking, Continuous Development is, according to Wikipedia, “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with software development.”

The first step is Continuous Integration and unit tests. After every single commit by a developer, the main branch app should be compiled and built, and then unit tests should be executed to give the quickest feedback possible. The post suggests using mutation testing, which deliberately injects small faults into your code to see whether your tests catch them, as a way to evaluate how good the unit tests themselves are. After that, the developer should be made aware of how their commit changed the overall code coverage statistics.

The next step is Continuous Delivery, or Automated Deployment. One should do numerous test-environment deployments in order to exercise the deployment process of the application as well. After this comes testing of higher-level things, such as functionality at the integration or API level. End-to-end testing is very expensive, resource-wise, and should be done sparingly.

After that is performance testing, using a testing environment as close to the production environment as possible; you want to see how the application handles heavy loads. Then comes security testing, to make sure the application is as safe from being hacked as you can manage, and after that the hardest step, exploratory testing. This is a manual exploration of the application that takes a lot of time and resources, so it should be done sparingly as well.

Overall, this was another nice intersection between software development and testing. It was also a good reminder of concepts I learned in the very recent past, which I found very interesting at the time. The ability to streamline the process for a developer and to give them feedback as quickly as possible is incredibly important, since it readily fosters greater productivity. To create such a process, there are many useful tools out there for testers and developers alike. It’s a very straightforward example of testing directly helping developers, which is nice to see.

Original Post: http://www.awesome-testing.com/2016/10/testops-3-continuous-testing.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 5 by Brent Jenson and Allen Page.

In this week’s testing episode, Brent and Allen begin by addressing end-to-end automation testing. It seemed that the original purpose of automation testing was being bypassed: automation testing is best suited for short, quick tests and regression checks, but by applying development architecture to testing, we are able to create a more organized and more structured development environment.

Brent continues by describing an issue that happened at Amazon while he was there. They didn’t seem to have enough testers, because whenever an update was made, it was reverted due to bugs and collisions with other programs that were found later. The reversion process led developers to place signals and interrupts in their programs that would be triggered when parts of the app or project were breaking. This ended up educating the team about the need for more testers who could find bugs and faults in the programs and updates. Project rollouts and changes often have drastic effects on overall product quality in the eyes of the users, and it is often overlooked that creating proper checkpoints in a program builds a strong barrier against loss of service, since the checkpoints would be triggered by any update that affects the performance of the program. Teaching programmers testing techniques forces them to refactor their code and build it to withstand updates that could break it; they also tend to write code that can be easily tested for bugs and holes. On the other hand, this practice creates a peculiar cost optimization, which can produce very complex code that is not easily tested using automation, since its outputs cannot be predicted.

Another tool that was introduced in the podcast was automated GUI testing. This is a testing approach that is often used by developers to build proper test cases and scenarios. Automated GUI testing increases testing effectiveness, speeds up delivery time, and improves test coverage. This is the main reason why teams that adopt agile testing methodologies and continuous integration practices continue to invest in automated testing tools that can be used to perform front-end testing. Implementing GUI testing becomes more complex as time progresses and is almost never a linear process; it is a demanding part of the development lifecycle that QA teams must dedicate a large amount of time to. To sum things up, the best automated testing tools will not only have strong record-and-replay capabilities and flexible testing frameworks, but will also help you cut down on testing times and increase the speed of delivery.

LINK

https://testingpodcast.com/?powerpress_pinw=4538-podcast

https://smartbear.com/learn/automated-testing/manual-vs-automated-gui-testing/

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

What Is Project Valhalla?

In this blog post, Justin Albano explains what Project Valhalla is.

“Project Valhalla is an OpenJDK project started in 2014 and headed by Brian Goetz with the purpose of introducing value-based optimizations to Java Development Kit (JDK) 10 or a future Java release. The project is primarily focused on allowing developers to create and utilize value types, or non-reference values that act as though they are primitives. In the words of Goetz: Codes like a class, works like an int.”

“Project Valhalla has a very specific purpose: To cease the requirement that Java developers choose between performance and abstraction.”

What Are Value Types?

“Value types are groups of data whose immediate value is stored in memory, rather than a reference (or pointer) to the data.” Doing this means saving memory otherwise taken up by overhead data. “Taking data and directly placing its value into memory (rather than a reference) is called flattening and its benefits are more acutely demonstrated with arrays.” “In an array of value types, the values are directly placed into the array and are guaranteed to be in contiguous memory (which increases locality and, consequently, the chance of cache hits).” This idea is illustrated in a figure in the original post.

The benefits to using Value Types are listed by the author:

  • Reduced memory usage: There is no need for additional memory used to store object metadata.
  • Reduced indirection: Because objects are stored as reference types in Java, each time you access an object it must first be dereferenced, causing additional instructions to be executed.
  • Increased locality: Using flattened value objects removes indirection, increasing the likelihood that values are stored adjacently in memory.

One of the major differences between reference types and value types:

“The identity of a reference type is intrinsically bound to the object while the identity of a value type is bound to its current state”

The reason I picked this resource is that I did not know about Project Valhalla, and it seemed like an interesting article to learn from. The project is not quite ready to be released in the JDK, but it promises a useful addition to Java that increases performance and saves memory. I feel the content of the post was interesting and informative. I learned the benefits of using value types versus references, and about the improvements being made to Java. Value types may be released in an upcoming JDK, and I would like to know how to utilize them when saving memory is crucial.

Additional resource: Minimal Value Types article

The post What Is Project Valhalla? appeared first on code friendly.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Blog #6 – AngularJS for Absolute Beginners

https://medialoot.com/blog/angularjs-for-absolute-beginners/

For this week’s blog, I chose a tutorial titled “AngularJS for Absolute Beginners” by Jenn Coyle, because our final project is going to be coded in Angular, so I think that finding different tutorial blogs for Angular is a good way to familiarize myself with the framework. I like this blog by Jenn Coyle because it takes a straightforward approach to Angular and does a great job of making the tutorial easy to follow, so that it doesn’t overwhelm the person reading it.


Coyle’s blog starts off with an important sentence: “Let’s face it, writing web applications is hard.” I like that Coyle started her blog with that sentence, because she assures the reader (who is probably just starting to learn Angular) that if they think writing web applications is hard, that’s perfectly fine, because it’s supposed to be. I also think this adds a sense of comfort for the reader.

Coyle goes on to say that AngularJS eases the pain of battling to make a functional front-end. The blog makes clear the prerequisites you’ll need to learn Angular, which is just a basic understanding of HTML, CSS, and JavaScript.

Coyle’s next point is why Angular is helpful. Anyone who has written code for a front-end web app has probably written code that’s soaked with bad practices. She defines imperative programming, which changes the state of the web application by creating a flow of instructions, for example one that appends new elements to a list; this can be seen as negative because it can hide important functionality. Other problems that Angular addresses are direct DOM manipulation, global scope, and lack of organization.

Because of these problems, a meaningful and reusable structure for applications was created: AngularJS. AngularJS is a front-end JavaScript framework that provides us with an opinionated way to build a powerful front end for applications. It is opinionated because it forces the user to follow a specific pattern.

Coyle explains the structure of Angular well: it is built to isolate specific responsibilities of an application from each other, and it is made of different components. Coyle also goes over the other basic structures of Angular, like directives, controllers, and views.

Coyle then goes on to give step-by-step instructions on how to create an AngularJS program.

  1. Create the module
  2. Initialize the View
  3. Create the controller
  4. Set up $scope
  5. Tell the view about the controller
  6. Bind data to the page
  7. Create view layout
  8. Set up data on the controller
  9. Bind the todos to the view
  10. Set up the template
  11. Finishing a todo
  12. Removing the todo in the controller
  13. Add a new todo
  14. Adding a todo to the controller

All of these steps come with code and an explanation, which helps make everything clear. This, along with the thorough explanations Coyle gives for everything in AngularJS, is why I chose this blog as a good tutorial for beginners. It breaks everything up into an approachable process for anyone who wants to learn AngularJS.

From the blog CS@Worcester – Decode My Life by decodemylifeblog and used with permission of the author. All other rights reserved by the author.