Category Archives: Week 10

11/27/2017 — blog post 10 CS443

https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
The last two blog posts on code review discussed suggestions for improvement, its overall advantages, and a general overview of what exactly it is; this week's blog post on code review will focus on the statistics. In general, the post lists 11 proven practices, based on experiments on and surveys of experienced software developers, that have helped team members improve their code review abilities. Why is this important? Suggestions from experienced developers are based on experience, but this post crunches the numbers from case studies to demonstrate their effectiveness and to suggest 11 proven practices based on those statistics. This is why I chose this article this week: to emphasize the importance of code reviews.
In my opinion, this article is one of the best I have seen so far for providing suggestions for code reviews and crunching the numbers as well. As introduced, the suggestions from this post were compiled from code review studies and collections of lessons learned from over 6000 programmers at over 100 companies. The main issue with code review, as the article suggests, is efficiency: reviews are often too long to be practical. The conclusions in the article come from a study of 2500 code reviews, 50 programmers, and 3.2 million lines of code at Cisco Systems. The study tracked production teams with members in Bangalore, Budapest, and San Jose for 10 months.
The first suggestion is to review fewer than 200-400 lines of code at a time. The Cisco code review study revealed that the optimal range for finding defects is 200-400 lines of code at a time; after that, the ability to find defects decreases. The statistics show that if 10 defects existed, the team would likely find 7 to 9 of them. What is interesting is the graph shown in figure 1 of defect density against the number of lines of code under review. According to the graph, as the number of lines of code under review increases beyond 200, the defect density drops off considerably. So the optimal range for effectiveness is 200-400 lines.
The second suggestion is to aim for an inspection rate of fewer than 300-500 LOC per hour. The inspection rate measures how fast the team is able to review code. According to figure 2, the effectiveness of the inspection falls off when more than about 500 lines of code are reviewed per hour. Finally, the last interesting point is to never review code for more than 90 minutes. The article suggests that developers take enough time for a proper, slow review, but no more than 90 minutes. It is generally known from many case studies that code review beyond 60 minutes just diminishes in productivity, effort, and concentration. Most reviewers simply get tired and stop finding additional defects, and some will probably not be able to review more than 300-600 lines of code before their performance drops. So the rate of finding bugs deteriorates after 60 minutes, and overall, code reviews should not last more than 90 minutes.
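To make those two metrics concrete, here is a small sketch of my own (not from the article) computing them:

```javascript
// Defect density is commonly expressed as defects found per 1000 lines
// of code (KLOC); the inspection rate is lines of code reviewed per hour.
function defectDensity(defectsFound, linesReviewed) {
  return (defectsFound / linesReviewed) * 1000;
}

function inspectionRate(linesReviewed, hoursSpent) {
  return linesReviewed / hoursSpent;
}

console.log(defectDensity(8, 300));  // ~26.7 defects per KLOC
console.log(inspectionRate(400, 1)); // 400 LOC/hour, within the 300-500 guideline
```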
I chose this article because the suggestions were too good to pass up for a blog post. I find the first three suggestions helpful to keep in mind when reviewing code, as effort always diminishes with time.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Continuous Development

Continuing on with the TestOps posts by the, well, awesome Awesome Testing blog is Continuous Development. This is actually very interesting, as it was a large part of what was taught in my Software Process Management course last year, so it was an enjoyable surprise to see it as the next covered topic.

Generally speaking, Continuous Development is, according to Wikipedia, “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with software development.”

The first step is Continuous Integration and unit tests. After every single commit by a developer, the main branch app should be compiled and built, and then unit tests should be executed to give the quickest feedback possible. The post suggests using mutation testing, which deliberately adds random faults to your code to see how well your tests perform, to test the unit tests themselves and see how good they are. After that, the developer should be made aware of how their commit changed the overall code coverage statistics.
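As a rough illustration of those two ideas, here is a sketch of my own (a hypothetical function with a Jest-style unit test, not code from the post):

```javascript
// A hypothetical function under test.
function priceWithTax(price, rate) {
  if (price < 0) throw new Error('price must be non-negative');
  return price * (1 + rate);
}

// A unit test giving fast feedback on every commit.
test('applies the tax rate to the price', () => {
  expect(priceWithTax(100, 0.1)).toBeCloseTo(110);
});

// A mutation testing tool (e.g. Stryker for JavaScript) would inject faults
// such as turning `price * (1 + rate)` into `price / (1 + rate)`; if no test
// fails, the surviving mutant points to a weakness in the test suite.
```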

The next step is Continuous Delivery, or Automated Deployment. One should do numerous test environment deployments to test the deployment process of the application as well. After this comes testing of higher-level things, such as functionality at the integration or API level. End-to-end testing is very expensive, resource-wise, and should be done sparingly.
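An API-level check might look something like this sketch, assuming a hypothetical Express app and the supertest library (my example, not the post's):

```javascript
const request = require('supertest');
const app = require('./app'); // hypothetical Express app

test('GET /api/users returns a list of users', async () => {
  const res = await request(app).get('/api/users');
  expect(res.status).toBe(200);               // the endpoint responds successfully
  expect(Array.isArray(res.body)).toBe(true); // and returns a JSON array
});
```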

After that is performance testing, using a testing environment as close to the production environment as possible; you want to see how the application handles heavy loads. Then comes security testing, to make sure the application is as safe from being hacked as you can manage, and then the hardest step, exploratory testing. This is a manual exploration of the application that takes a lot of time and resources, so it should be done sparingly as well.

Overall, this was another nice intersection between software development and testing. It was also a good reminder of concepts I learned in the very recent past, which I found very interesting at the time. The ability to streamline the process for developers and give them feedback as quickly as possible is incredibly important, and it readily fosters greater productivity. To create such a process, there are many useful tools out there for testers and developers alike. It's a very straightforward example of testing directly helping developers, which is nice to see.

Original Post: http://www.awesome-testing.com/2016/10/testops-3-continuous-testing.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 5 by Brent Jenson and Allen Page.

In this week’s testing episode, Brent and Allen begin by addressing end-to-end automation testing. It seemed that the original purpose of automation testing was being bypassed: automation testing is best suited for short tests and regression checks, but by bringing development architecture into testing, we are able to create a more organized and more structured development environment.

Brent continues by addressing an issue that happened at Amazon while he was there. They didn’t seem to have enough testers, because whenever an update was made, it was reverted due to bugs and collisions with other programs that were found later. The reversion process caused developers to place signals and interrupts in their programs that would be triggered when parts of the app or project were breaking. This ended up educating the team about the need for, and importance of, more testers to find bugs and faults in the programs and updates. Project rollouts and changes often have drastic effects on overall product quality in the eyes of the users. It is often overlooked that creating proper checkpoints in a program creates a great barrier against loss of service, since a checkpoint would be triggered should there be an update that affects the performance of the program.

Teaching programmers testing techniques forces them to refactor their code and build it to withstand updates that could break it. They also tend to write code that can be easily tested for bugs and holes. This practice creates a unique optimization of cost, since very complex code is not easily tested using automation when its outputs cannot be predicted.

Another tool that was introduced in the podcast was automated GUI testing. This is a testing feature that is often used by developers to build proper test cases and scenarios. Automated GUI testing increases testing efforts, speeds up delivery time, and improves test coverage. This is the main reason why teams that adopt agile testing methodologies and continuous integration practices continue to invest in automated testing tools that can be used to perform front-end testing. Implementing GUI testing becomes more complex as time progresses and is almost never a linear process; it is a demanding part of the development lifecycle that QA teams must dedicate a large amount of time to. To sum things up, the best automated testing tools will not only have strong record-and-replay capabilities and flexible testing frameworks, but will also help you cut down on testing time and increase the speed of delivery.
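To give a sense of what an automated GUI test looks like in practice, here is a minimal sketch using the selenium-webdriver package; the page, selectors, and credentials are hypothetical, not from the podcast:

```javascript
const { Builder, By, until } = require('selenium-webdriver');

(async function loginTest() {
  // Drive a real browser through the GUI, the way a user would.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login');
    await driver.findElement(By.name('username')).sendKeys('tester');
    await driver.findElement(By.name('password')).sendKeys('secret');
    await driver.findElement(By.css('button[type="submit"]')).click();
    // Wait for the post-login page and verify that it loaded.
    await driver.wait(until.titleContains('Dashboard'), 5000);
  } finally {
    await driver.quit();
  }
})();
```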

LINK

https://testingpodcast.com/?powerpress_pinw=4538-podcast

https://smartbear.com/learn/automated-testing/manual-vs-automated-gui-testing/

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

What Is Project Valhalla?

In this blog post, Justin Albano explains what Project Valhalla is.

“Project Valhalla is an OpenJDK project started in 2014 and headed by Brian Goetz with the purpose of introducing value-based optimizations to Java Development Kit (JDK) 10 or a future Java release. The project is primarily focused on allowing developers to create and utilize value types, or non-reference values that act as though they are primitives. In the words of Goetz: Codes like a class, works like an int.”

“Project Valhalla has a very specific purpose: To cease the requirement that Java developers choose between performance and abstraction.”

What Are Value Types?

“Value types are groups of data whose immediate value is stored in memory, rather than a reference (or pointer) to the data.” Doing this means saving memory otherwise taken up by overhead data. “Taking data and directly placing its value into memory (rather than a reference) is called flattening and its benefits are more acutely demonstrated with arrays.” “In an array of value types, the values are directly placed into the array and are guaranteed to be in contiguous memory (which increases locality and, consequently, the chance of cache hits).” The original post illustrates this idea with a figure comparing the two array layouts.

The benefits to using Value Types are listed by the author:

  • Reduced memory usage: There is no need for the additional memory used to store object metadata.
  • Reduced indirection: Because objects are stored as reference types in Java, each time one is accessed the reference must first be dereferenced, causing additional instructions to be executed.
  • Increased locality: Using flattened value objects removes indirection, increasing the likelihood that values are stored adjacently in memory.

One of the major differences between reference types and value types:

“The identity of a reference type is intrinsically bound to the object while the identity of a value type is bound to its current state”
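Since Valhalla's value-type syntax was not final at the time of writing, here is an analogy of my own in JavaScript rather than Java: objects already behave like reference types and primitives like value types, which mirrors the identity distinction quoted above.

```javascript
// Two objects with the same state are still different identities (reference).
const a = { x: 1, y: 2 };
const b = { x: 1, y: 2 };
console.log(a === b); // false: identity is bound to the object

// Two primitives with the same state are the same value (value semantics).
const p = 42;
const q = 42;
console.log(p === q); // true: identity is bound to the current state
```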

The reason I picked this resource is that I did not know about Project Valhalla, and it seemed like an interesting article to learn from. It’s not quite ready to be released in the JDK, but it’s a useful addition to Java that increases performance and saves memory. I feel the content of the post was interesting and informative. I learned the benefits of using value types versus using pointers, and about the improvements that have been made to Java. Value types may soon be released in an upcoming JDK, and I would like to know how to utilize them when saving memory is crucial.

Additional resource: Minimal Value Types article

The post What Is Project Valhalla? appeared first on code friendly.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Blog #6 – AngularJS for Absolute Beginners

https://medialoot.com/blog/angularjs-for-absolute-beginners/

For this week’s blog, I chose a tutorial titled “AngularJS for Absolute Beginners” by Jenn Coyle, because our final project is going to be coded in Angular, so I think that finding different blog tutorials for Angular is a good way to help familiarize myself with the framework. I like this blog by Jenn Coyle because it’s a straightforward approach to Angular and does a great job of making the tutorial easy to follow, so that it doesn’t overwhelm the person reading it.


Coyle’s blog starts off with an important sentence: “Let’s face it, writing web applications is hard.” I like that Coyle started her blog with that sentence, because she assures the reader (who is probably just starting to learn Angular) that if they think writing web applications is hard, that’s perfectly fine, because it’s supposed to be. I also think this adds a sense of comfort for the reader.

Coyle goes on to say that AngularJS eases the pain of battling to make a functional front-end. The blog makes clear the prerequisites you’ll need to learn Angular: just a basic understanding of HTML, CSS, and JavaScript.

Coyle’s next point is why Angular is helpful. Anyone who has written code for a front-end web app has probably written code that’s soaked with bad practices. She defines imperative programming, which changes the state of the web application by creating a flow that appends new elements to a list; this can be seen as negative because it can hide important functionality. Other problems that Angular addresses are direct DOM manipulation, global scope, and lack of organization.

Because of these problems, a meaningful and reusable structure for applications was created: AngularJS. AngularJS is a front-end JavaScript framework that provides us with an opinionated way to build a powerful front-end for applications. It is opinionated because it forces the user to follow a specific pattern.

Coyle explains the structure of Angular well: it is built to isolate specific responsibilities of an application from each other, and it is made up of different components. Coyle also goes over the other basic structures of Angular, like directives, controllers, and views.

Coyle then goes on to give step-by-step instructions on how to create an AngularJS program:

  1. Create the module
  2. Initialize the View
  3. Create the controller
  4. Set up $scope
  5. Tell the view about the controller
  6. Bind data to the page
  7. Create view layout
  8. Set up data on the controller
  9. Bind the todos to the view
  10. Set up the template
  11. Finishing a todo
  12. Removing the todo in the controller
  13. Add a new todo
  14. Adding a todo to the controller

All of these steps come with code and an explanation, which helps make everything clear. This, and the thorough explanations that Coyle gives for everything in AngularJS, is why I chose this blog as a good tutorial for beginners. It breaks everything up into an approachable process for anyone who wants to learn AngularJS.
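To give a flavor of what the first few steps look like in practice, here is a minimal sketch of my own (hypothetical names, not Coyle's exact code) covering the module, the controller, $scope, and data binding, in a single HTML page with inline AngularJS JavaScript:

```html
<!DOCTYPE html>
<html ng-app="todoApp">
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
  <script>
    // Step 1: create the module.
    angular.module('todoApp', [])
      // Steps 3 and 4: create the controller and set up $scope.
      .controller('TodoController', function ($scope) {
        $scope.todos = [{ title: 'Learn AngularJS', done: false }];
        $scope.addTodo = function () {
          $scope.todos.push({ title: $scope.newTodo, done: false });
          $scope.newTodo = '';
        };
      });
  </script>
</head>
<!-- Step 5: tell the view about the controller. -->
<body ng-controller="TodoController">
  <!-- Step 6: bind data to the page. -->
  <ul>
    <li ng-repeat="todo in todos">{{ todo.title }}</li>
  </ul>
  <input ng-model="newTodo" placeholder="New todo">
  <button ng-click="addTodo()">Add</button>
</body>
</html>
```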

From the blog CS@Worcester – Decode My Life by decodemylifeblog and used with permission of the author. All other rights reserved by the author.

Angular and The Future of Web Apps

Currently in class we are learning about the tools and functionality of Angular. Angular is a JavaScript-based open-source web application framework, currently maintained by Google along with a community of other developers. Being open source attracts many users and continually increases the framework's growth in popularity. Recently, we have seen just this: developers are choosing Angular because of its great framework and because it enables Progressive Web Applications. Progressive Web Apps (PWAs) is a term that is being universally accepted by developers across the world. It is a way of creating the best web and mobile apps by taking advantage of the most recent technologies in order to make web apps more efficient and fast. Google has been leading the initiative for Progressive Web Apps by putting its design philosophy behind them and distributing public data and toolkits to help people get started on their own web apps as well. I chose this article because I think it is a great example of why Angular is a good framework to work with: it is forward-thinking relative to Progressive Web Applications. It will also help further my understanding of Angular and allow me to see the greater benefits of the framework.

Developers want their apps to be more efficient, and they also want them to be scalable. This goes for both desktop and mobile apps. By creating web apps that are user-friendly and provide the scalability that allows multiple types of users to interact with them, you can create a very successful platform, because it gives users a reason to keep using your app. Angular is great because it is an open-source framework that has a ton of support behind it.

Angular has gone through many great changes that improve its functionality, sustainability, and reliability. These are also the main keys of a Progressive Web App, which is why a lot of developers tend to like it. The first version of AngularJS was released in 2012. Until that time, no one had ever really seen a web app infrastructure that was this reliable and easy to understand. Angular had the ability to reduce boilerplate and also greatly improved code testability. Angular then made a great leap in 2014 with the team's announcement of Angular 2. Angular 2 was the newest version of Angular, and it was written with Microsoft's superset of JavaScript, TypeScript. As you can imagine, these are two very popular and approachable languages. Angular 2 was also focused on being more compact and extremely fast.

As we can see, Angular 2 is becoming the fastest-growing environment for web app development among many developers. This affects me personally because Angular is a great tool to utilize when designing and planning out web applications. I also learned that the future of Progressive Web Applications is quickly evolving and that Angular 2's infrastructure is a great resource to consider. In the future, I hope to expand my experience with Angular and to apply that knowledge toward the development of reliable and easy-to-use web apps.


Source: https://jaxenter.com/angular-progressive-web-apps-2018-139076.html

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Algorithms, Puzzles and the Technical Interview – Episode 29.

The Coding Blocks podcast is presented by Joe Zack, Michael Outlaw, and Allen Underwood. In this episode, the squad discusses the details and understanding of algorithms while addressing the problem-solving puzzles that are often required for one to make it through a technical interview. The first topic that Michael talked about was staying on top of your coding skills and understanding the latest implementations and trends in the industry. He then recommended TopCoder.com, a resource that hosts coding competitions, often with a prize incentive for the winner. I think this is a great idea, because we can all testify that the less you code, the more your skills become obsolete and sloppy. Not only would recommending a site like that help coders sharpen their syntax and best practices, it also creates great portfolio references and helps build connections that could play a huge role in allowing a fellow coder to further his or her skills.

Later in the podcast they began talking about the latest update to their Angular project, which includes Angular 2.0. Angular 2.0 is built on TypeScript. Allen initially talked about his frustration with TypeScript, since it seemed to just translate what needed to be done into another language, but he then addressed some of its important features: it is backwards compatible and enables you to use closures and constructor-type constructs. He also addressed the similarities between TypeScript and object-oriented programming languages like Java or C#. Another resource that was mentioned was Codecademy; they recommended this site for developers who want to learn a new programming language or pick up a new programming skill for free.

After many side discussions, the question was asked: what is an algorithm? They defined an algorithm as a set of instructions and procedures that gets a task completed, and a program as a set of lines of instructions that are run to complete the task; the program is the implementation of the algorithm. They also defined a design pattern as a collection and organizational workflow that helps organize code and makes it easy to maintain over time. Finally, they talked about how you can prepare for a technical interview with a potential employer as a developer. Knowing your basic algorithms and how they can be implemented is a great way to prepare for an interview. It is a known fact that software algorithms remain the same; they are just reimplemented in different ways.
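To make the algorithm-versus-program distinction concrete, here is a small illustration of my own (not from the podcast). The algorithm is binary search, i.e. 'repeatedly halve a sorted range until the target is found'; the function below is one program implementing it:

```javascript
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found the target
    if (sorted[mid] < target) lo = mid + 1; // search the upper half
    else hi = mid - 1;                      // search the lower half
  }
  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9], 7)); // 3
```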

Link – Episode 29

https://player.fm/series/coding-blocks-software-and-web-programming-security-best-practices-microsoft-net/episode-26-algorithms-puzzles-and-the-technical-interview


From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Post # 13

While researching AngularJS for the final project, I came across a really useful blog post written by Todd Motto, owner of Ultimate Angular, entitled “Ultimate guide to learning AngularJS in one day”.  I was intrigued by this post because it contains a section solely devoted to defining the terminology that is commonly-used in AngularJS development, which I found to be incredibly helpful.  In this blog post, I will reiterate the definitions of the most important terms, in hopes that it will aid me in the development of my team’s application.

The article begins by defining what exactly AngularJS is and what it is used for.   Motto defines AngularJS as a “client-side MVC/MVVM framework built in JavaScript, essential for modern single page web applications (and even websites)”.  MVC is short for Model-View-Controller, and MVCs are used in many programming languages as a means of structuring software.  Model refers to the data structure behind a specific portion of an application, usually ported in JSON; View refers to the HTML and/or rendered output of an application.  By using an MVC, you’ll pull down Model data which updates your View and displays the relevant data in the HTML; Controller refers to a mechanism, within an application, that provides direct access from the server to the view so that data can be updated on the fly via communication between the server and client(s).

Motto then explains how to set up an AngularJS project with the bare essentials.  The essential elements that make up an AngularJS application are a definition, controllers, and binding and inclusion of AngularJS within an HTML file.

Controllers, as defined by Motto, are the direct access points between the server and the view that are used to update data on the fly. The HTML of an AngularJS application should contain little to no physical text or hard-coded values, because all of that data should be pushed into the view from a controller. Web applications should be as dynamic as possible and, by pushing values to the view from a controller in the back-end, we can achieve this. Motto then emphasizes that controllers are to be used for data only, and for creating functions that are used in communication between the server and JSON.

Directives, as defined by Motto, are small pieces of templated HTML that should be used multiple times throughout an application’s development. Directives are the easiest way to push data into the view. A directive is configured through a list of properties, including: restrict (restriction of the element’s usage), replace (replaces the markup in the view that defines the directive), transclude, template (allows declaration of markup to be injected into the view), and templateUrl (similar to template, but kept in its own file).
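As a rough sketch of how those properties fit together (hypothetical names, not code from Motto's article):

```javascript
// A custom directive usable as an element: <user-card user="someUser"></user-card>
angular.module('app').directive('userCard', function () {
  return {
    restrict: 'E',       // restrict usage to an element
    replace: true,       // replace the <user-card> markup in the view
    template: '<div class="user-card">{{ user.name }}</div>',
    // templateUrl: 'user-card.html' could be used instead of an inline template
    scope: { user: '=' } // two-way bind the `user` attribute into the directive
  };
});
```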

Services, as defined by Motto, are stylistic design patterns.  Services are used for singletons, and Factories are used for functions.  Filters are used in conjunction with arrays to loop through data and filter specific findings.
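A minimal sketch of a factory and Angular's built-in filter in use (my own illustration, with hypothetical names):

```javascript
// A factory returning an object of functions that controllers can inject.
angular.module('app').factory('TodoService', function () {
  const todos = [];
  return {
    add: function (title) { todos.push({ title: title, done: false }); },
    all: function () { return todos; }
  };
});

// In a view, the built-in `filter` loops through an array and keeps only
// the matching items, e.g.:
//   <li ng-repeat="todo in todos | filter:{ done: false }">{{ todo.title }}</li>
```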

Two-way data-binding is a full circle of synchronized data: update the Model and it updates the View; update the View and it updates the Model. This implies that data is kept in sync without issue.

A lot of the results of my research into AngularJS were, at first, of no use to me, because I had little understanding of the terminology and concepts described in them. I believe that this post gave me a good understanding of the fundamental concepts of AngularJS, and I now feel more confident as I continue development of my own application. I will likely refer back to this article as I make progress on my project and, inevitably, conduct more research.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Code Review: What is it and Why is it Important?

Link to blog: http://thinkapps.com/blog/development/what-is-code-review/

In this blog written by Dario Macchi, Macchi explains what code review is and why it is necessary. He identifies what it is, its purpose, what peer review is, what peer reviewers look for, what an external review is, what external reviewers look for, and a few scenarios covering what code reviewers should do if something goes wrong or something is missed within the code review process.

Code Review: “systematic examination … of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers’ skills.”

Purpose: to validate the design and implementation of features within the code. Macchi identifies that there are two levels of code review: the peer review and the external review levels.

Peer Review: focused on functionality, design, implementation, and the usefulness of proposed fixes for stated problems. Macchi explains why it is necessary to perform peer review within his company: they expect developers to talk to each other about their design intentions and receive feedback throughout the design and implementation process. Macchi’s real-life experience gives us an example of the working world of software development and testing, and of how peer review is necessary. Peer reviewers look for feature completion, potential side effects, readability and maintenance, consistency, performance, exception handling, simplicity, the reuse of existing code, and test cases.

External Review: addresses different issues and focuses on how to increase code quality, promote best practices, and remove “code smells,” or poorly written code. This review process looks at the quality of the code itself and its effects on the overall project. External reviewers look for readability and maintenance, coding style, and code smells.

A scenario that Macchi illustrates is: “What if an external reviewer misses something?” The answer he gives is that we should not expect the external reviewer to make everything perfect; there will always be something that gets missed. In this case, it is better to have more than one external reviewer. Another set of eyes always helps.

I chose this blog because I wanted to know the right process for reviewing code. I also chose it because it relates to my Software Quality Assurance and Testing class, since I was given an assignment to review code with a group of classmates, which emulates the peer and external review processes of the real-life workplace. Macchi definitely outlines the aspects of code review very well when he identifies the two levels, peer review and external review. Knowing code reviews will definitely help me apply myself in the future to many software development and testing jobs, as well as to my video game development career, because there is a 100% chance that I will review code with a team of other programmers no matter where I work, especially when it comes to creating video game software.


From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

Code Coverage

“Did my tests cover all the code?” This is a question that often pops up in testers’ minds when they are writing tests, and code coverage can answer it: code coverage helps testers understand how much of their code is tested. Since I had not used any code coverage tools before, I thought that it would be a good idea to start learning about them through an introduction post. Below is the URL for the post.

https://www.atlassian.com/continuous-delivery/introduction-to-code-coverage

In this post, Sten Pittet, who has been in the software business for ten years in various roles from development to product management, introduced the definition of code coverage, the common metrics found in coverage reports, and a tip for choosing the right tool for different projects. He also discussed what percentage of coverage testers should aim for. Moreover, he thought that testers should focus on unit testing first, use coverage reports to identify critical misses in testing, and make code coverage part of their continuous integration flow.

Sten mentioned a few metrics that testers should pay attention to when reading coverage reports. They were function coverage, statement coverage, branch coverage, condition coverage, and line coverage. Function coverage shows how many of the defined functions have been called. Statement coverage shows how many of the statements in the program have been executed. Branch coverage shows how many branches of the control structures (if statements, for instance) have been executed, while condition coverage shows how many of the Boolean sub-expressions have been tested for both a true and a false value. Line coverage shows how many lines of source code have been tested. The example given in the post helped me understand the terms more easily.
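To see how the metrics differ, consider this small example of my own (not from Sten's post):

```javascript
// A hypothetical function with one if statement.
function classify(age) {
  if (age >= 18) {
    return 'adult';
  }
  return 'minor';
}

// A single test, classify(20), yields 100% function coverage (classify was
// called) but incomplete statement, branch, and line coverage: the false
// branch and the `return 'minor'` line are never executed. Adding a second
// test, classify(10), covers both branches.
```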

In Sten’s opinion, 80% code coverage is a good goal to aim for; trying to reach higher coverage might turn out to be costly while not necessarily producing enough benefit. He also said that it is normal to have low coverage in the first runs, and that testers should not feel pressured to reach 80% coverage right away. To be honest, I was really surprised that he only recommended 80% coverage. But when I thought about it harder, it made sense that pushing for higher coverage might be costlier and less beneficial, since real-life projects are usually much bigger than school projects. He also highlighted that testers should write tests based on the business requirements of the application rather than write tests that simply hit every line of the code.

Furthermore, Sten emphasized that good coverage does not equal good tests. Code coverage tools can help testers understand where they should focus next, but they will not tell whether the existing tests are robust enough for unexpected behaviors. Therefore, besides achieving great coverage, testers should maintain a good, robust test suite and verify the integrity of the system. I agreed with him: looking at his example, I could see clearly how bad it would be if we relied only on the tools to write tests. Besides the information about code coverage, I can apply his advice whenever I write tests.


From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.