Category Archives: Week 1

Simplify, Simplify, Simplify

“Coding with Clarity”

Software developer Brandon Gregory’s two-part post on the blog “A List Apart” describes key concepts every software designer should follow in order to make the functions in our programs not only functional, but also efficient and professionally designed.

The first piece of advice Gregory shares with us is to maintain what he calls the “single responsibility principle”. In other words, each function should focus on only one process. By following this rule, we not only limit the margin of error in our code, but also increase its flexibility. Having simple functions makes a program easier to read and modify, and makes errors easier to detect.
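To make the idea concrete, here is a minimal sketch in Java. The report classes are hypothetical names of my own invention, not from Gregory’s post; the point is that each piece has exactly one reason to change.

```java
// A minimal sketch of the single responsibility principle.
// A single formatAndPrintReport() function would have two reasons
// to change; split apart, each class has exactly one duty.

class Report {
    private final String title;
    private final String body;

    Report(String title, String body) {
        this.title = title;
        this.body = body;
    }

    String getTitle() { return title; }
    String getBody() { return body; }
}

class ReportFormatter {
    // Only responsible for turning a report into text.
    String format(Report report) {
        return report.getTitle() + "\n" + report.getBody();
    }
}

class ReportPrinter {
    // Only responsible for output.
    void print(String text) {
        System.out.println(text);
    }
}
```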

The next concept Gregory illustrates is what he calls “command-query separation”. This means that a function should either carry out some work (a command) or answer a question and return some data (a query), never both. If a program needs to both change data and return information, it is better to write two separate functions to handle the two tasks.
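A minimal sketch of what this separation might look like in Java, using a hypothetical bank-account class rather than an example from the post:

```java
// Command-query separation: commands change state and return nothing;
// queries return data and change nothing.
class BankAccount {
    private int balanceCents;

    // Command: performs an action, returns no data.
    void deposit(int amountCents) {
        balanceCents += amountCents;
    }

    // Query: answers a question, changes nothing.
    int getBalanceCents() {
        return balanceCents;
    }

    // To avoid: a method that both changed the balance AND returned
    // the new value would mix the two styles Gregory warns against.
}
```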

Finally, Gregory delves into the details of “loose coupling” and “high cohesion” in code. By “loose coupling” he means that each subsection of a program should be as independent as possible, not relying on other parts of the code to do its job. Similarly, the author advises us to aim for “high cohesion”, meaning that each object in a program should only contain data and functions that are relevant to that object.
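A small sketch of high cohesion, again with hypothetical names: the invoice class below contains only invoice concerns, and because it knows nothing about, say, how invoices get emailed, it also stays loosely coupled to the rest of a program.

```java
import java.util.ArrayList;
import java.util.List;

// High cohesion: every field and method here is an invoice concern.
// Emailing or printing invoices would live in separate classes, which
// keeps this class loosely coupled to the rest of the system.
class Invoice {
    private final List<Integer> lineItemCents = new ArrayList<>();

    void addLineItem(int cents) {
        lineItemCents.add(cents);
    }

    // Computing the total is clearly an invoice responsibility.
    int totalCents() {
        int total = 0;
        for (int cents : lineItemCents) {
            total += cents;
        }
        return total;
    }
}
```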

Personally, I very much agree with and appreciate Gregory’s perspective and advice on writing clean code. He very effectively clarified a lot of the concepts that I’ve had trouble implementing in my previous programs. One of my favorite lines in this post was “Large functions are where classes go to hide.” I found this quote very useful because it helps solidify the process of abstraction. From this point on, if I’m writing a function that becomes too unruly, long, or complicated, I will ask myself, “would this large function be better off as an independent class?” I definitely learned a lot about good practices in simplifying functions, and I will reflect on them from now on when developing programs.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Differences in Integration Testing

Source: http://www.satisfice.com/blog/archives/1570

The blog post Re-Inventing Testing: What is Integration Testing? by James Bach takes an interview-like approach to exploring integration testing. It starts with the simple question “What is integration testing?” and goes on from there. As the respondent answers, James adds a descriptive analysis of the answer and what he is thinking at that point in time. The objective of the interview is to test both his own knowledge and the interviewee’s.

This was an interesting read, as it seems related to a topic from a previous course that discussed coupling, the degree of interdependence between software modules, which is why I chose this blog post. What I found interesting about this blog post was that his chosen interviewee was a student, so the entire conversation can be viewed from a teacher-and-student perspective. This is useful because it allows me to see how a professional would like an answer to be crafted and presented in a clear manner. For example, the interviewee’s initial answer is textbook-like, which prompted James to press for details.

Some useful information about integration testing is also provided through this conversation. Integration testing is performed when multiple pieces of software are combined and tested together as a group. In this blog post, it is noted that not all levels of integration are the same. Sometimes “weak” forms of integration exist; an example from the blog is when one system creates a file for another system to read. There is a slight connection between the two systems because they interact with the same file, but as they are independent systems, neither of the two knows that the other exists.
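To illustrate that “weak” form of integration, here is a minimal sketch in Java with hypothetical class names: one program writes a file, another reads it, and neither class references the other. The shared file path is their only point of contact.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Producer knows nothing about Consumer, and vice versa.
// The file is the only "contract" between the two systems.
class Producer {
    void export(Path path) throws IOException {
        Files.write(path, List.of("record1", "record2"));
    }
}

class Consumer {
    List<String> importRecords(Path path) throws IOException {
        return Files.readAllLines(path);
    }
}
```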

From this blog post, I can tell that any testing requires much more than textbook knowledge of the subject. As mentioned briefly in the blog, there are risks involved with integrating two independent systems, and there is a certain amount of communication between the two. The amount of communication determines the level of integration: the stronger the communication between the two systems, the more dependent they are on one another to execute certain functions.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

B1: Levels Of Testing

https://blog.testlodge.com/levels-of-testing/

I found an interesting blog post this week that talked about levels of testing within software engineering. It explained that there are four main levels, known as Unit Testing, Integration Testing, System Testing, and Acceptance Testing. The post used a basic diagram to help explain the ideas, alongside explanations for each level. It explained that Unit Testing is usually done by the developers of the code and involves testing small individual modules of code to make sure that each part works. Integration Testing is explained as taking individual modules, like in Unit Testing, and combining them to see if they work together as a “section” of modules. The post explains System Testing as the first test that works with the entire application. This level has multiple tests that work through application scenarios from beginning to end. It is used to verify multiple requirements within the software, such as the technical, functional, and business aspects. Acceptance Testing is defined as the final level of testing, which determines whether the software can be released or not. It verifies that the requirements set by the business are met, so that the client gets what they want.
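As a concrete picture of the lowest level, here is a minimal unit test sketch using JUnit 5. The Calculator class is a hypothetical module of my own, not from the post; the point is that one small unit is tested in isolation, before any integration testing happens.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A tiny module under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// A unit test: exercises one module by itself, the way the post says
// developers do before combining modules for integration testing.
class CalculatorTest {
    @Test
    void addCombinesTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}
```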

I chose this article because I remember that the syllabus only states three main levels, excluding Acceptance Testing. This is what sparked my initial curiosity as to what these levels of testing were and what Acceptance Testing was. I found this content really interesting because it explains how testing can be structured into different levels, with each level building off of the last one. I enjoyed how the post explained the levels sequentially while also explaining how they interact with one another. I was able to grasp an understanding of these software testing levels while also understanding the important and vital role that testing plays within the development process. The most interesting part of the blog post was the Acceptance Testing, because it reminds the reader that in almost every scenario of software development there are always going to be changes to the requirements of the original project. This level builds off that idea and essentially allows the developers to make sure that the product they are working on meets the flexible criteria of a client. I found that the diagram didn’t make sense when I first looked at it before reading the post. However, as I understood the subject more, I found it to be a great source that summarized and simplified the detailed ideas within the post.


From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

Positive Testing

https://www.softwaretestinghelp.com/positive-testing/#more-37401

This article explains the technique of positive testing. Positive testing is one method of testing software to make sure that it is working correctly. With this technique, a valid set of test data is used to check whether an application functions as per the specification document. For example, if you are testing a textbox that can only accept numbers, a positive test would be entering numbers into the box. This is in contrast to negative testing, which would be entering something besides numbers to see if the application breaks. The goal is to see whether an application works as desired when it is being used correctly. For this reason, positive testing should be the first step of test execution.
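Here is a minimal sketch of the article’s textbox example as JUnit 5 tests. The NumericField class is hypothetical; the first test is the positive test (valid digits are accepted), and the second shows the contrasting negative test.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// A hypothetical textbox validator that only accepts digits.
class NumericField {
    boolean accepts(String input) {
        return input != null && input.matches("\\d+");
    }
}

class NumericFieldTest {
    @Test
    void positiveTestValidDigitsAreAccepted() {
        assertTrue(new NumericField().accepts("12345"));
    }

    @Test
    void negativeTestLettersAreRejected() {
        assertFalse(new NumericField().accepts("abc"));
    }
}
```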

There are two techniques discussed that are used in positive testing. With boundary value analysis, a tester creates test data that falls within a data range, focusing on the values at its boundaries. With equivalence partitioning, the test input is divided into equal partitions, and values from each partition are used as test data.
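To show what choosing that test data might look like, here is a small sketch for a hypothetical field that accepts values from 1 to 100 (the range is my own illustration, not from the article):

```java
// Selecting positive test data for a field that accepts 1..100.
class TestDataSelection {
    // Boundary value analysis: values at and just inside the edges
    // of the valid range, where off-by-one bugs tend to hide.
    static final int[] BOUNDARY_VALUES = {1, 2, 99, 100};

    // Equivalence partitioning: split the valid range 1..100 into
    // three partitions and test one representative value from each.
    static final int[] PARTITION_REPRESENTATIVES = {17, 50, 83};
}
```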

I thought this article was very interesting because positive testing is something that I have always done as the first step in testing without realizing there is a name for it. I think it is natural to see if your application works correctly before you test for bugs by trying to break it. The part that I found most useful was the section on equivalence partitioning. I usually just try random inputs when I am testing something, but it makes a lot more sense to divide the possible inputs into equal partitions so that data from each partition can be tested.

This article will not significantly change how I work because I have always used positive testing as the first step. Although now I will make sure to use equivalence partitioning when I am testing. I am glad I read this article because now I have a good understanding of what positive testing is and I will not be confused if I run into the term in the future. Reading this article has changed the way I think about testing because now I understand that there are different testing types and techniques that should be followed correctly to make sure that a piece of software is stable and bug-free.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

The Future of Performance Analytics

In the future, data analytics will see much heavier investment due to the sheer amount of information certain companies will need to collect and maintain. This is an issue that not only needs to be solved, but whose problems need to be addressed before progress can be made, which is what is currently hindering it.

This article from centraljersey.com talks about the rapid speeds needed to meet deadlines for “high demand” analytical solutions. It goes into how certain markets are investing in analytical technologies in order to predict the future and thus be able to market services optimally. However, the article states that three main factors are greatly hindering this push: security, privacy, and error-prone databases. Not only do these kinds of methods take time, they also need to be secure, both to protect massive amounts of data and to operate as efficiently as possible.

Upon reading this article, what interested me is that North America accounts for the largest market share due to the growing number of “players” in the region. Per the article, a lot of this investment is going toward cloud-based solutions. What I found interesting, however, is that this company (Market Research Future) provides research to their clients. They have many dedicated teams devoted to specific fields, which is why they can craft their research very carefully. What I find useful about this posting is that it shows just how important the future of data analytics and organization can be. With the future of data collection, there will need to be more optimized solutions to handle and control these types of research data.

The content of this posting confirms my belief that cloud computing and cloud-based data analysis will continue to grow and evolve rapidly over the coming years. With more and more companies migrating to cloud-based systems, not only for internal needs but for client needs as well, we will see a great push toward optimized data sorting and faster data transfer. Expansion in cloud computing and web-based services will become the main staple of future products such as this.


From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Strategies on Securing your Software

“Hacker, Hack Thyself”

Stack Overflow co-founder and author of the blog “Coding Horror” Jeff Atwood writes in this post about his experience trying to secure his open source project “Discourse” from security threats. Atwood discusses the hashing algorithms they use to protect their database and users’ data, as well as the strategies they used to test the strength of their cyber security and password policies.

To test their designs, the developers attempted to hack into their own software and track the estimated time it took their systems to crack passwords of varying lengths. They did this by creating various passwords on the servers, starting with the simplest allowable strings of digits, increasing the length of the passwords, and moving on to more complex passwords with words and numbers combined. What they found was that passwords combining case-sensitive letters and digits would take up to three years to crack.

By cracking the hashes of these passwords and recording the amount of time it took to do so, the developers had meaningful data that informed them of their software’s resilience to security threats, and that would presumably have a significant effect on their password policies and the development of future hash algorithms, if needed.

I found Atwood’s post both interesting and informative. It was interesting to see the strategies the developers used to protect their database from what Atwood describes as “a brute force try-every-single-letter-and-number attack”. Still, I was surprised to see how much of a difference there was in the time it took to crack the simple passwords compared to the complex ones.

On the technical side of things, this post scratched the surface of a lot of important concepts that I would love to learn more about. For instance, Atwood goes into some detail about the proper complexity and number of iterations that should go into a solid hash function. That type of knowledge is extremely valuable in developing secure programs.
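For a flavor of what iterated, salted hashing looks like in practice, here is a minimal sketch using the JDK’s built-in PBKDF2 support. To be clear, this is not Discourse’s actual implementation, and the iteration count is purely illustrative; the idea is simply that a high iteration count makes each brute-force guess expensive.

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

class PasswordHasher {
    static byte[] hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);   // unique salt per password
        int iterations = 100_000;             // slows brute-force attacks

        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return factory.generateSecret(spec).getEncoded();
    }
}
```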

Atwood concludes his post by expressing a better understanding of exactly which types of attacks his software is strong against and which it is vulnerable to. I definitely agree with Atwood’s proactive philosophy about cyber security, and I believe that kind of reasoning is instrumental to being a successful software developer.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Journey into Software C.D.A. a SOLID Explanation

As I take a step forward in my journey into Software C.D.A., I am told that for my first task I must find a blog related to the class topic. The blog I chose was “The Solid Principles of Object Oriented Design” by Joseph Smith. I chose this blog because it’s about the Object Oriented Design principles, a topic that was supposed to be covered on the first day of class but was not, since we ran out of time. This blog was short, simple, and direct to the topic, hence why I chose it. The content of the blog is about the S.O.L.I.D. principles of Object Oriented Design. I will give a summary of what the blog was about and what it explained, from my point of view and understanding. It goes as follows:

There are five Object Oriented Design principles, known collectively as SOLID. These principles are used by software developers to help them successfully develop applications that are clear and workable.

SOLID stands for the following:

S)  SRP – Single Responsibility Principle.

  • This means that a class really needs only one duty, and only one reason to change.

O)  OCP – Open Closed Principle

  • This means that a class needs to be open for extension, but closed for modification.

L)  LSP – Liskov Substitution Principle

  • This means that a child class must be substitutable for its parent class. In other words, wherever the parent class is used, an instance of the child class should be able to take its place without breaking the program.

I)  ISP – Interface Segregation Principle

  • This means a class should not be required to implement any methods it does not need. Interface segregation makes this possible by splitting larger interfaces into smaller ones, so that by the time a class implements an interface, it only has to provide relevant methods.

D)  DIP – Dependency Inversion Principle

  • This means that high-level modules should be independent of low-level modules; both should depend on abstractions. In turn, abstractions should not depend on details; details should depend on abstractions. (A small sketch of this idea follows below.)
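Since DIP was the principle I found hardest to grasp, here is a minimal sketch of it in Java. All the names are hypothetical; the point is that the high-level ReportService depends only on the Storage abstraction, never on the low-level FileStorage detail.

```java
// The abstraction both sides depend on.
interface Storage {
    void save(String data);
}

// Low-level module: a detail that depends on the abstraction.
class FileStorage implements Storage {
    public void save(String data) {
        System.out.println("Saving to file: " + data);
    }
}

// High-level module: knows only about the Storage interface, so the
// detail can be swapped (database, cloud, mock) without changing it.
class ReportService {
    private final Storage storage;

    ReportService(Storage storage) {
        this.storage = storage;
    }

    void publish(String report) {
        storage.save(report);
    }
}
```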


The blog finishes by saying it will continue with more in-depth information on these principles over the next few weeks. It also suggests going to Wikipedia for more information or detail on the subject, so I did, mainly because I was unclear on the Dependency Inversion Principle. I went on Wikipedia to see how it’s explained there and whether I could understand the concept better. Let’s just say I did, so the definition I placed in the DIP section was obtained from Wikipedia and not from the blog itself. Other than that part, everything else in the blog was understandable and explained the principles well. I found the link to the Wikipedia page useful. I like how the author focused only on the SOLID principles, because it’s a pretty big subject and very easy to get lost and confused in. This blog has taken me in the right direction toward understanding one of the subjects related to the class Software Construction, Design, and Architecture (CS-343). I honestly liked this blog, and it made me realize I already know some of the topic, since it has been covered in past CS classes. I am eager to learn more, as it will help perfect my skills.

This has been YessyMer in the World of Computer Science. Thank you for your time, and until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

PATTERN: Sustainable Motivations

Sustainable Motivations

The author opens this apprenticeship pattern by addressing all the intangibles that are often overlooked in the programming world. He talks about the challenges we, as developers, encounter in our careers. He addresses the issue of horrendous real-world projects that are often rigorous, tedious, and exhausting. Work can go from frustrating at times to overly chaotic or constraining, backed by a businessman who only knows what the current trend demands. Through all this, the author urges us to hold firm and ensure that our drive for mastery propels us to withstand the situation.

Personal Reflection

I was fortunate enough to be taking the CS-348 class, so I got to witness the dynamics of a software development environment through one of our in-class simulations. There I realized that constant specification changes by the businessman can often lead to a stressful and frustrating environment to work in, but it is here that the author tells us to ground our motivations in the Walking the Long Road pattern. In that pattern, we are taught to continue taking on tasks that build and mold our skills. So in the midst of all the chaos, we are expected to find a related source of interest in programming that will continue to carry us when the going gets really tough.

I personally feel like this is the hardest pattern to master because, normally, programming is challenging, so the only thing that keeps us going at it is our passion for coding and developing software. Should that passion be attacked, we have no more source of interest. But the author tells us to persist even when we have lost our drive, and to find a secondary source that can fuel us through the tough times until our original passion returns. I do agree that it gets to a stage where being able to provide for your family comes into the equation, which rules out switching fields or quitting in general, and money often serves as the secondary drive that can propel us until we get our initial vision back. The life of a programmer is filled with many adventures, learning slopes, and curveballs, but finding joy in programming amidst the bad times deepens the love and passion to be great!

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman

The Introduction to the textbook sets out the goals of the book and its target audience.

“One of our goals for this book is to inspire the people who love to create software to stay focused on their craft.”

“This book is written entirely for you—not for your boss, not for your team leader, not for your professor.”

What Is Software Craftsmanship?
The book describes software craftsmanship as a set of overlapping values; listed here are those values with my thoughts on them.

  • Growth Mindset, the belief that you can always improve and find faster, easier ways to do things if you are prepared to work. Failure should be seen as an incentive to change your approach. “Effort is what makes you smart or talented” – Carol Dweck. I think the growth mindset is very important to quality and success and to avoiding burnout.
  • Constantly adapting to constructive criticism and learning to improve areas where you are lacking. This idea is essential to ensuring quality work and keeping up-to-date on the best practices.
  • A desire to be pragmatic rather than dogmatic. This important trade-off allows you to get things done and avoid spending too much time making things “perfect”. This ties in well with YAGNI.
  • A belief that information is better shared than hoarded. Everyone benefits from shared knowledge which is the basis for the idea of Open Source projects. This allows a greater population to improve shared code and learn how things work.
  • Ability and willingness to experiment and not worry about failing, because you are able to learn from the experience.
  • Taking control of and responsibility for our destinies rather than waiting for someone else to give us the answer. I think this is basically saying to take initiative, or try to think differently to solve a problem.
  • Focusing on individuals rather than groups. This book is meant to help you as an individual improve your skills.
  • A commitment to inclusiveness. I think this is a good rule in everyday life but works with being a craftsman.
  • We are skill-centric rather than process-centric. It’s more important to be highly skilled than to use the “right” process. I think it boils down to this: it pays to have the knowledge to use your skills in any situation, without having to rely on a template of tools.
  • Situated Learning, the idea of learning a skill among people who are actively using that skill. Working your first job as a software developer would be a good example of this.
Based on the introduction, this text appears to have very useful information for someone who wants to improve the quality of their work and what they contribute. The book includes good lessons that can apply to any aspect of life. It stresses the idea of improving skills by being open to learning from others, learning from mistakes, and never stopping improving.

The post Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman appeared first on code friendly.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Chapter 1

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.