Author Archives: Abranti3 Dada Kay

Clean Architecture – Components and Component Cohesion

Episode 71

The Coding Blocks podcast is presented by Joe Zack, Michael Outlaw and Allen Underwood. In this episode, the squad begins by talking about component cohesion in architectural design. Cohesion in software design refers to the degree to which the elements inside a module belong together. In one sense, it is a measure of the strength of the relationship between the methods and data of a class and some unifying purpose or concept served by that class. In another sense, it is a measure of the strength of the relationship between the class's methods and data themselves. Cohesion is an ordinal type of measurement and is usually classified as either "high cohesion" or "low cohesion".

According to the team, there was a principle known as the fishbowl principle that was employed in system building and architectural design for many years: it was believed that the fish would eventually grow to fit the bowl it was placed in. That has changed over time. With services like AWS and other cloud functionality, developing software of any size is easily manageable, and scalability is often handled by high-performance systems that allocate resources to where they are needed most, and vice versa when demand drops. A big part of this new trend of software reuse is propelled by the open source projects that currently power much of the software industry. Building software in components also propels this trend of code reuse, because component code is built to be self-contained and able to run on its own. A component is viewed as a module that fits into a part of the bigger puzzle, and testing a module or component does not break the original code, since it is tested as a single entity that interacts with the overall project.

Another topic the group discussed is the Common Closure Principle. The Common Closure Principle says that a component should consist of classes that change for the same reason and at the same time, which is similar to the Single Responsibility Principle. This simply means that if the character of a class is changing, then the component is also going to change. We need to make sure that the component is changing for one reason only, and if there is more than one reason, then there should be more than one component. Overall, this episode went very in-depth into the technical practices and techniques that are used to develop components and architecture in software creation. That level of depth was a little more than what we are studying, but I felt it was necessary, as it gets us thinking about how to build software in components and parts and how to start allocating functionality to individual components.
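The cohesion idea is easier to see in code. Below is a minimal sketch of my own (the class names and the tax rule are hypothetical, not from the episode): the first class changes for two unrelated reasons, while the split versions each have a single reason to change and could each live in their own component.

```python
# Low cohesion: this class changes when report formatting rules change
# AND when tax rules change -- two unrelated reasons to modify it.
class InvoiceManager:
    def calculate_tax(self, amount: float) -> float:
        return amount * 0.07

    def format_report(self, amount: float) -> str:
        return f"Invoice total: ${amount:,.2f}"


# Higher cohesion: each class has one reason to change, so each can be
# packaged, tested, and released as part of its own component.
class TaxCalculator:
    def calculate(self, amount: float) -> float:
        return amount * 0.07


class ReportFormatter:
    def format(self, amount: float) -> str:
        return f"Invoice total: ${amount:,.2f}"
```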

 

Link – Episode 71

https://player.fm/series/coding-blocks-software-and-web-programming-security-best-practices-microsoft-net

https://en.wikipedia.org/wiki/Cohesion_(computer_science)#frb-inline

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Episode 33 — Testing in Data Science

In this week's testing podcast episode, Brian explores testing in data science with the well-known Katharine Jarmul. Katharine is an expert in data science and machine learning, and she mainly uses Python to write unit tests for her projects. I picked this podcast because, after listening to it, I learned more about how to put together a testing team, how to manage and direct traffic in a testing team, and how to be the driving force for success on the team. According to her, no matter how much we know as a team, each testing project requires us to bring together all our resources and ideas. Testing often goes beyond the scope of what is considered the norm, because in testing we normally try to find the boundaries and limits of products and software. As a teacher and owner of a consulting company, Katharine often spends her days developing testing strategies that require the implementation of new and modern testing approaches such as integrating QA through agility and a TCoE, higher automation levels with a focus on security, and context-driven testing.
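Since the episode centers on writing Python unit tests for data work, here is a small sketch of what such a test might look like. This is my own illustrative example, not code from the episode; the data-cleaning function and its rules are assumptions.

```python
import unittest


def normalize_ages(ages):
    """Clean a list of age values: drop missing or out-of-range entries."""
    return [a for a in ages if a is not None and 0 <= a <= 120]


class TestNormalizeAges(unittest.TestCase):
    def test_drops_missing_and_out_of_range_values(self):
        raw = [25, None, -3, 40, 200]
        self.assertEqual(normalize_ages(raw), [25, 40])

    def test_empty_input_returns_empty_list(self):
        self.assertEqual(normalize_ages([]), [])


if __name__ == "__main__":
    unittest.main()
```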

 

Integrating QA through agility and TCoE

Though agile development teams have been around for a long time, agility in testing is still nascent. With continuous pressure to deliver software quickly, businesses are investing time and money into setting up a TCoE (Testing Center of Excellence) with the objective of reducing the cost of quality (CoQ), increasing test effectiveness, and generating more ROI from testing. From 2011 to 2014, the number of operational TCoEs increased from 4% to 19%, and it is expected to increase further in the future.

 

Higher Automation Levels with a focus on security

System robustness and security have always been top priorities, but with the growth of social media and mobility and the need for software that can be integrated with multiple platforms, systems are becoming more vulnerable. There is a pressing need to ensure enhanced security, particularly in applications handling sensitive data. This is causing QA to focus more on security testing.

 

Context driven testing

The challenge of maintaining the central hubs of hardware, middleware, and test environments needed to test products comprehensively has made context-driven testing more popular, as it ensures broader test coverage from diverse angles. This is expected to impact skill development among testers, since there will be more demand for testers with exposure to different contexts.

 

Sources

https://testingpodcast.com/33-katharine-jarmul-testing-in-data-science/

http://www.cigniti.com/blog/top-7-trends-in-software-testing/

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Clean Coding – Coding Blocks

Episode 49 – Clean Code – Comments Are Lies

 

 

The Coding Blocks podcast is presented by Joe Zack, Michael Outlaw and Allen Underwood. In this episode, the hosts discuss creating good, clean code and eliminating as many comments as possible. Initially, I was very confused by this advice from professional developers, because in my first intro to Java class my teacher emphasized making sure that we thoroughly commented the methods and functions we wrote. Points were even taken off for not properly commenting code, and then all of a sudden my CS 443 professor tells me that commenting is not really a good practice, since your code should be written so well that understanding the thought process and the program is easy. But the more I thought about this, the more I understood what was being taught by that teacher and now by this podcast episode. No one writes comments for print statements, because they are so rudimentary that everyone understands them just by looking at them. That is how our algorithms should be designed. Code readability and understanding should be the goal of every developer who walks out of school. Using comments in clean code has its pros and cons. Comments almost never get updated while the code gets updated and fixed, so they tend to mislead. They propagate lies and misinformation because, as the code is modified and updated, they are often left untouched. The only exception to this rule of thumb is when one is writing a public API that will be used by other developers. Comments can be seen as a way for programmers to make up for their shortcomings in programming: if methods and variables are named and designed properly, there is little need for comments. Time spent writing comments can instead be used to optimize the program and improve its readability and logical flow. Another bad case is when comments are not obsolete but simply misleading; inaccurate comments put the developer in the wrong frame of mind and logic. The proper approach is to use refactoring and clean code techniques that build program structure and design instead of attempting to explain bad code with comments. Ultimately, it makes sense that developers want to explain their thoughts and processes with comments, but it is just more effective when the thought process is explained through the logic and functionality of the code and methods.
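A quick illustration of the point (my own hypothetical example, not code from the episode): the comment in the first version only exists to compensate for poor names, and it disappears once the function and variables express the intent themselves.

```python
# Version that leans on a comment to explain unclear names.
def calc(d, r):
    # apply the discount rate to the total and return the final price
    return d - (d * r)


# Self-documenting version: the names carry the intent, so no comment
# is needed, and there is nothing to drift out of date later.
def apply_discount(total_price: float, discount_rate: float) -> float:
    return total_price - (total_price * discount_rate)
```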

 

 

Link – Episode 49

https://player.fm/series/coding-blocks-software-and-web-programming-security-best-practices-microsoft-net/episode-49-clean-code-comments-are-lies

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

#6 – Testing Talks

Testing Talks – Episode 169 With JeanAnn Harrison.

 

In this week's testing podcast episode, Joe interviews JeanAnn, a software testing manager who has been in the software quality assurance field for over two decades. JeanAnn begins by addressing techniques and best practices that make for a fluid testing process. I chose this particular episode because JeanAnn addressed automation in testing and critical thinking in testing. With the development of modern technology, software automation is the next big thing in the world today.

Another big thing she talked about was critical thinking. Critical thinking often refers to the ability to try different thought processes and develop new methodologies to achieve an already familiar goal: the ability to think outside the box and push yourself to look at things in a different way. To evolve one's thinking ability in testing, we can try to look at things outside the software-testing field. By doing this, one is able to develop the kind of logical reasoning that is often necessary to find a product's boundaries and limits. Asking yourself simple questions like how, when, what, and where often answers most of what a software tester needs to know and helps uncover the bugs that need to be found. By asking these questions, you are able to see how the software or product could be integrated into computer systems, the kinds of problems that can arise from implementing the new product or technology, and what can be done to resolve issues should they arise. Another big thing she addressed was understanding a software's users and customer base. Developing apps and programs without knowing the expected audience leads to many problems in the software world. Imagine developing a mobile application for 70-year-olds that is built around the latest iPhone technology. This would not work out well, because many people in that age range struggle to adapt to new technology or even to use a cellular device. Again, imagine developing a walker for blind people with an activation switch installed on the side and a printed on/off label. That would be physically challenging and difficult for a blind person to use optimally. It might be exactly the product that blind people need, but its failure to account for blind users would instantly make it a bad design or a bad product to acquire. Simply put, if you know your user base, you can find out what needs to be designed so the product properly fits the needs of its users and customers.

 

 

LINK

 

https://testingpodcast.com/169-critical-thinking-in-testing-with-jeanann-harrison/

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

#5 – Episode 67

Episode 67 – Object Oriented Mistakes

Presented by Joe Zack, Michael Outlaw and Allen Underwood, the group addressed many object-oriented mistakes that coders often make. I found this to be an interesting podcast because, like many upcoming coders, I make mistakes that cause problems when attempting to develop in an object-oriented environment. From the previous design patterns episode, we were able to understand the best situations for each design pattern and which one should be used. For example, when using domain designs, we have learned not to use anemic domain models. Before we continue: an anemic domain model is a software domain model where the domain objects contain little or no business logic, such as validations, calculations, and business rules. They are typically called bags of properties with getters and setters, without any kind of behavioral method. The domain objects therefore cannot guarantee their own correctness at any moment, because the validation and mutation logic is usually placed somewhere outside the class in question. One might ask why this is considered an anti-pattern in today's programming world. First of all, it disrupts the concept of object-oriented design; it contradicts what object orientation is meant to provide when one opts to use it. Object orientation lets objects combine state and behavior, but the anemic model reduces them to data holders with no behavior of their own. It can work for a simple application, because there is a clear separation between logic and data, unlike in object-oriented programming, and simple applications do not require a ton of logic to be implemented or methods with behavioral code in them. After this point, Michael spoke of another anti-pattern, known as the BaseBean anti-pattern. This is when you inherit functionality from a utility class instead of delegating to that utility class. The issue with this practice is that it defeats the purpose of inheritance: inheritance gets used for the wrong reasons. The main purpose of inheritance is to create a hierarchical order in which code can be managed and understood, and wrong use of inheritance disrupts object and class properties. Overall, design patterns are created so that incoming and even intermediate developers have a guide that can be used as a reference. They provide us with a way of getting things resolved, or at least the first solution that solved the problem. It is up to every developer to learn beyond that and recognize how to apply the already laid-out patterns to their own coding needs and development.
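To make the anemic-model discussion concrete, here is a small sketch of my own (the class names and rules are hypothetical, not from the episode): the anemic version is just a bag of data that must be validated from the outside, while the richer version keeps its rules inside the object.

```python
# Anemic version: just data with no behavior; any validation has to live
# in some service class outside, so the object can't guarantee its own state.
class AnemicAccount:
    def __init__(self):
        self.balance = 0.0


# Richer domain object: behavior and validation live with the data,
# so the object enforces its own invariants.
class Account:
    def __init__(self, balance: float = 0.0):
        if balance < 0:
            raise ValueError("balance cannot be negative")
        self._balance = balance

    def withdraw(self, amount: float) -> None:
        if amount <= 0 or amount > self._balance:
            raise ValueError("invalid withdrawal amount")
        self._balance -= amount

    @property
    def balance(self) -> float:
        return self._balance
```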

 

 

 

Source

https://player.fm/series/coding-blocks-software-and-web-programming-security-best-practices-microsoft-net/episode-67-object-oriented-mistakes

https://en.wikipedia.org/wiki/Anemic_domain_model

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 10 by Brent Jenson and Allen Page.

Episode 10 by Brent Jenson and Allen Page.

In this week's testing podcast episode, Brent and Allen begin by discussing changes that Microsoft made in the testing department during Allen's absence. Microsoft decided to shift its focus to a quality-assurance-based system instead of emphasizing a testing-based platform. Although these concepts are closely related in the field of software testing, they each represent something different. Software testing is usually the process of executing a system with the intent of finding defects. Through testing, we are able to assess or evaluate the capabilities or attributes of a software program and its ability to adequately meet the applicable standards and customer needs. Quality assurance, on the other hand, refers to a set of activities designed to ensure that the development and/or maintenance process is adequate for the system to meet its objectives. These new changes at Microsoft forced engineering managers to move between teams and to understand the overall product specifications that need to be accomplished. After this, they moved on to the duties and abilities of a manager that drive a QA team to either success or failure. Inadequate technical skills and bad manners often lead to the demise of a team, according to Brent. He believes that a manager who falls into this category only knows how to "demand more from the sausage crank". This not only damages the crank but also shifts the focus of the operation. Increasing the fire under the meat produces the desired result: the fire is the resources and help needed to get the task accomplished, the sausage is the software, and the crank is the team members. To be a good testing and QA manager, you have to task the team members with things you would do yourself if you were the worker. Consistency, and your employees' confidence in you, pushes the team's momentum forward. Another big factor is that product quality directly relates to the team's adequacy. The best products more likely than not come from the best QA and testing teams. Understanding that the well-being of a team affects the product it is able to produce is one of the first steps to building a great QA and testing team. Good organizational health has to be the focus of every organization that supplies products and services to users.

 

 

 

Sources

https://testingpodcast.com/category/ab-testing/page/7/

http://www.mosaicinc.com/mosaicinc/rmThisMonth.asp

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 5 by Brent Jenson and Allen Page.

In this week's testing episode, Brent and Allen begin by addressing end-to-end automation testing. It seemed that the original purpose of automation testing was being bypassed: automation is best suited for short, quick tests and regression checks. But by applying development architecture to testing, we are able to create a more organized and more structured development environment. Brent continues by addressing an issue that happened at Amazon while he was there. They didn't seem to have enough testers, because whenever an update was made it was reverted due to bugs and collisions with other programs that were found later. The reversion process caused developers to place signals and interrupts in the program that would be triggered when parts of the app or project were breaking. This ended up educating the team about the need for and importance of more testers to find bugs and faults in the programs and updates. Project rollouts and changes often have a drastic effect on overall product quality in the eyes of the users. It is often overlooked that creating proper checkpoints in a program builds a strong barrier against loss of service, since a checkpoint is triggered whenever an update affects the performance of the program. Teaching programmers testing techniques forces them to refactor their code and build it to withstand updates that could break it. They also tend to write code that can be easily tested for bugs and holes. This practice creates a unique cost optimization, though it can also produce very complex code that is not easily tested with automation, since its outputs cannot be predicted. Another tool that was introduced in the podcast was automated GUI testing. This is a testing approach that is often used by developers to build proper test cases and scenarios. Automated GUI testing scales up testing efforts, speeds up delivery time, and improves test coverage. This is the main reason why teams that adopt agile testing methodologies and continuous integration practices continue to invest in automated testing tools that can be used to perform front-end testing. Implementing GUI testing becomes more complex as time progresses and is almost never a linear process. It is a demanding part of the development lifecycle that QA teams have to dedicate a large amount of time to. To sum things up, the best automated testing tools will not only have strong record-and-replay capabilities and flexible testing frameworks, but they also help you cut down on testing time and increase the speed of delivery.
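Since the episode closes on automated GUI testing, here is a minimal sketch of what a front-end check might look like with Selenium WebDriver in Python. This is my own example, not from the episode; the URL, element IDs, and expected page title are hypothetical placeholders.

```python
# A minimal automated GUI test sketch using Selenium WebDriver.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires ChromeDriver on the PATH
try:
    driver.get("https://example.com/login")  # hypothetical page
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    # A simple GUI-level check: did we land on the expected page?
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```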

 

 

 

 

LINK

https://testingpodcast.com/?powerpress_pinw=4538-podcast

https://smartbear.com/learn/automated-testing/manual-vs-automated-gui-testing/

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Algorithms, Puzzles and the Technical Interview – Episode 29

The Coding Blocks podcast is presented by Joe Zack, Michael Outlaw and Allen Underwood. In this episode, the squad discusses the details and understanding of algorithms while addressing the problem-solving puzzles that are often required to make it through a technical interview. The first topic Michael talked about was staying on top of your coding skills and understanding the latest implementations and trends in the industry. He then recommended TopCoder.com, a resource that hosts coding competitions, often with a prize incentive for the winner. I think this is a great idea, because we can all testify that the less you code, the more obsolete and sloppy your skills become. Not only does recommending a site like that help coders sharpen their syntax and best practices, it also creates great portfolio references and helps build connections that could play a huge role in allowing a fellow coder to further his or her skills. Later in the podcast they began talking about the latest update to their Angular project, which includes Angular 2.0. Angular 2.0 is built on TypeScript. Allen initially talked about his frustration with TypeScript, since it seemed to just translate what needed to be done into another language, but he then addressed some important features of TypeScript: it is backward compatible and lets you use closures and constructor-type constructs. He also addressed the similarities between TypeScript and object-oriented programming languages like Java or C#. Another resource that was mentioned was Codecademy. They recommend this site for developers who want to learn a new programming language or pick up a new programming skill for free. After many side conversations, the question was asked: what is an algorithm? They defined an algorithm as a set of instructions and procedures that gets a task completed, while defining a program as a set of lines of instructions that are run to complete the task; the program is the implementation of the algorithm. They also defined a design pattern as a collection and organizational workflow that helps organize code and makes it easy to maintain over time. Finally, they talked about how you can prepare for a technical interview with a potential employer as a developer. Knowing your basic algorithms and how they can be implemented is a great way to prepare for an interview. It is a known fact that software algorithms remain the same; they are just re-implemented in different ways.
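To illustrate the distinction they draw between an algorithm and a program: binary search, as an algorithm, is the abstract recipe of repeatedly halving a sorted range, while the function below is one concrete implementation of that recipe (my own example, not from the episode).

```python
def binary_search(sorted_items, target):
    """One concrete implementation of the binary search algorithm."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid          # found: return the index
        elif sorted_items[mid] < target:
            low = mid + 1       # discard the lower half
        else:
            high = mid - 1      # discard the upper half
    return -1                   # not found


print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```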

 

 

 

 

Link – Episode 29

https://player.fm/series/coding-blocks-software-and-web-programming-security-best-practices-microsoft-net/episode-26-algorithms-puzzles-and-the-technical-interview

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 3

 

 

In this week's testing episode, Brent and Allen bring new light to careers in software development with a focus on testing. It seems that most graduates have development as their first option; when they cannot do that, they want to fall back on software program management, and if all else fails, they settle for being a tester. The fact of the matter is that people don't value the career of testing. But if you are able to be a good tester, you build the skills required to make a great developer. Testers usually have to manage programs and create tests around that schedule. Again, students who study big data learn great and valuable skills that are applied in the field of testing. Reading maps and graphs, making analyses, and analyzing data inputs and outputs are all skills that help make one a great tester.

So why aren't many students becoming software testers?

Brent and Allen took a survey of a selected number of students and found out that schools are not teaching testing classes. This could be the reason why many students don't find value in studying software testing. It is often believed that testing is embedded in software development, but the sooner we demystify this the better. It has been shown that writing test strategies and plans before writing code often helps speed up the software building process. This is because knowing what your code is supposed to do makes it easier to write code that does what it is supposed to do.
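As a small illustration of that test-first idea (my own hypothetical example, not from the episode), the test below is written before the function exists and pins down exactly what the code is supposed to do:

```python
import unittest


class TestShippingCost(unittest.TestCase):
    """Written first: this spells out what calculate_shipping must do."""

    def test_orders_over_fifty_dollars_ship_free(self):
        self.assertEqual(calculate_shipping(60.00), 0.00)

    def test_small_orders_pay_flat_rate(self):
        self.assertEqual(calculate_shipping(20.00), 4.99)


# Only after the expectations above are clear is the function written.
def calculate_shipping(order_total: float) -> float:
    return 0.00 if order_total > 50.00 else 4.99


if __name__ == "__main__":
    unittest.main()
```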

 

How can we use metrics and data analysis to improve testing?

Data analysis often provides detailed information such as page load times, memory usage charts, and load balancing metrics. This data allows a tester to identify potential bugs and issues that need to be addressed before the release of a software product. By managing and properly observing the metrics that matter, we are able to produce better data that directly affect the performance of the software. By implementing proper metric techniques, software companies are able to relate marketing to performance and user feedback. Through metrics, Amazon is able to know how much a one-millisecond delay affects its annual revenue. With information like this, companies are able to properly manage their markets and know what their customers expect of them.
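As a rough sketch of what observing the metrics that matter can look like in practice, a tester might summarize page-load samples into percentiles and flag a regression against a budget. The numbers and the budget below are made up for illustration, not taken from the episode.

```python
import statistics

# Hypothetical page-load times in milliseconds gathered from a test run.
page_load_ms = [120, 135, 128, 450, 131, 140, 125, 610, 133, 129]

p50 = statistics.median(page_load_ms)
p95 = statistics.quantiles(page_load_ms, n=20)[18]  # 95th percentile

print(f"p50 = {p50:.0f} ms, p95 = {p95:.0f} ms")

# Flag a potential performance bug if the slow tail blows a chosen budget.
LOAD_TIME_BUDGET_MS = 300
if p95 > LOAD_TIME_BUDGET_MS:
    print("p95 exceeds the load-time budget; investigate before release")
```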

 

 

LINK

https://testingpodcast.com/?powerpress_pinw=4534-podcast

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Coding Blocks – Source Control – Episode 3

 

 

In this episode of Coding Blocks, Allen Underwood, Michael Outlaw, and Joe Zack addressed an issue in software construction that often makes or breaks a project or team: source control and management. I personally found this topic crucial because of a programming project I contributed to with a couple of my friends. Initially, we were just using Google Drive to update and track project changes, but as newer versions were created it became a mess to track which update did what and how stable each version was. We eventually resorted to using GitLab. It was here that I discovered the importance of source control in team work. We were able to section off parts of the project and distribute them among ourselves, and it was easy to modify and make changes because we always knew a stable version of the project existed should we break what we had pulled. In this podcast episode, the hosts pointed out another major reason why implementing source control is effective: many branches can be worked on at the same time. This way, problems and bugs can be resolved and fixed faster. Construction is also made a little easier, as people can work independently on building various parts of the software. Again, by pushing after every working build, the programmer is able to leave a stable version with an attached commit message, which helps the next person who touches the code understand what that build accomplishes.

Best practice tip: ensure that you only push back working code that has passed compilation.

Another topic the hosts addressed was the issue of missing paths that often occurs in source-controlled software development. They emphasized that having a consistent naming convention is a recommended best practice: some programs and software require you to switch between operating systems, and since that means different file systems, a standard naming convention for packages and file paths makes it easier for whoever works on the program to make edits and changes as needed.

While addressing source control, another major topic is pull requests. A pull request serves as another layer of verification and "code review". It allows you to submit your work against a specific branch and have it evaluated before it is pushed back to the main repository. This way, leads and managers can verify that the code written is correct and fits the required standards and specifications.

LINK

https://player.fm/series/coding-blocks-software-and-web-programming-security-best-practices-microsoft-net/episode-3-source-control-etiquette

 

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.