Category Archives: Week 1

Apprenticeship Pattern – Be The Worst

The apprenticeship pattern “Be The Worst” refers to the situation where you’ve outgrown your team, and possibly your entire development organization, and you’re no longer learning at a rate that is acceptable to you. The solution is to find another group of developers where you are the weakest member, so you have more room to grow. The quote used in the article to represent this is “Be the lion’s tail rather than the fox’s head!” Another quote that emphasizes this idea is “If you’re the smartest person in the room, you’re in the wrong room.”

I think that while it’s true that being surrounded by developers who are smarter or more experienced than you will definitely accelerate your growth and learning, if everyone followed this idea then the “junior” team members wouldn’t have anyone above their experience level to learn from, because those people would have already moved on to greener pastures. That being said, not everyone follows this pattern. There are likely many experienced developers who are happy with the level they’re at and have no interest in switching teams, giving the less experienced members of their team a mentor to learn from.

Reading this pattern hasn’t really changed my mindset, as it bears some similarity to how I had already thought about my intended profession, albeit in a more extreme form. While I agree with the pattern as far as switching teams when you feel like you’re no longer growing, I feel that it can potentially be an unhealthy mindset to have. I think a better mindset would be to start looking for new opportunities once you’re no longer satisfied with your work. The wording here is very similar to the original pattern, but it also takes into account those who enjoy their jobs, even if they might not be learning as fast as they could be had they joined a new team.

Overall, while I’d say that following this pattern would likely accelerate your growth, I don’t think it’s the be-all and end-all solution to a successful and rewarding career. Being the “smartest” person in the room doesn’t mean you have nothing to learn, especially when it comes to learning how to be a good mentor.


From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

Software Capstone Introduction

This semester, I begin my software development Capstone, where I get to work with my peers on a semester-long project. I will be making posts and reporting on my progress periodically throughout the semester on this blog, so keep up if you’re interested.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Hexagonal Architecture

Today I am going to discuss one of the software architectures: Hexagonal Architecture. Its purpose is to reduce the time needed to maintain and modify code, thereby improving maintainability: the higher the maintainability, the less work is required to accomplish a task. This architecture is not called “Hexagonal” for no reason. It is represented by a hexagon, which is very flexible and allows you to make changes at any time because of its independent layers. Each side of the hexagon has an input, an output, and a domain model. The three components of a Hexagonal Architecture are the Domain Model, Ports, and Adapters.


Since the Domain is placed in the middle of the hexagon, it is the center layer of the architecture and works independently of the other layers. The Domain Model maintains all the business data and the rules related to that data.

A Port is the way to get to the business logic; in other words, it serves as an entry point. There are primary and secondary ports. Primary ports are functions that allow you to make changes, and they get called by the primary adapters. Secondary ports are interfaces created for the secondary adapters, but unlike the primary ports, they get called by the core logic.

An Adapter serves as a bridge connecting the application to the outside services it needs. A primary adapter connects the user to the core logic through a piece of code; it might, for example, be a unit test for the core logic. A secondary adapter is an implementation of a secondary port (interface).
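The relationships above can be sketched in code. This is a minimal, hypothetical example (the `Account`, `AccountRepository`, and `deposit_use_case` names are my own, not from the article): the domain model knows nothing about the outside world, a secondary port is an interface the core defines, and adapters plug in from either side.

```python
from abc import ABC, abstractmethod

# Domain model: business data and rules, independent of the outside world.
class Account:
    def __init__(self, balance: int) -> None:
        self.balance = balance

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

# Secondary port: an interface the core logic defines for what it needs.
class AccountRepository(ABC):
    @abstractmethod
    def save(self, account: Account) -> None: ...

# Secondary adapter: one possible implementation of that port.
class InMemoryAccountRepository(AccountRepository):
    def __init__(self) -> None:
        self.saved = []

    def save(self, account: Account) -> None:
        self.saved.append(account.balance)

# Primary port: the entry point that primary adapters (UI, tests) call.
def deposit_use_case(repo: AccountRepository, account: Account, amount: int) -> None:
    account.deposit(amount)
    repo.save(account)

repo = InMemoryAccountRepository()
acct = Account(balance=100)
deposit_use_case(repo, acct, 50)
print(acct.balance)  # 150
```

Because the core only depends on the port interface, the in-memory adapter could be swapped for a database adapter without touching the domain model, which is where the flexibility comes from.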

I found this article interesting because the writer knows who the audience is and explains everything in detail. The topic is also related to our CS-343 class, and it might be a good start to get into the world of software architecture. Nowadays we are looking for simplicity and flexibility, and this is what Hexagonal Architecture is about. According to the article that I chose, the benefits of a Hexagonal Architecture are:

– Agnostic to the outside world

– Independent from external services

– Easier to test in isolation

– Replaceable Ports and Adapters

– High Maintainability

Apparently, Hexagonal Architecture makes the work easier and more efficient, based on this article.

Thank you for taking the time to read my blog!




From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

The Surprisingly Complicated Problem of Programming a World Clock

In computer science, there are a lot of things we normally take for granted that might not be so easy to program. Take time, for instance. If we wanted to create an app to calculate how many seconds ago a given time and date was, it seems straightforward. Of course we would account for things like leap days, but as I discovered, it quickly becomes a lot more complicated than that.

As Tom Scott explains in an episode of the YouTube channel “Computerphile,” a simple-sounding app like this can be jarringly complicated. For starters, to create this app for a worldwide audience, it seems reasonable to make it available in all time zones.

Adjusting it for each time zone seems straightforward enough. All you have to do is take Greenwich Mean Time and add or subtract hours based on the time zone the user is in, right? Not quite.

Take daylight saving time, for instance. When daylight saving starts varies from country to country, and some states and countries don’t observe daylight saving time at all. This is just the tip of the iceberg of how complicated it gets.
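Python’s standard `zoneinfo` module (3.9+) makes some of this visible. A minimal sketch, assuming the system time zone database is available; the dates and zones are my own illustration:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
# The same wall-clock time has different UTC offsets depending on the season:
winter = datetime(2021, 1, 15, 12, 0, tzinfo=ny)  # UTC-5 (EST)
summer = datetime(2021, 7, 15, 12, 0, tzinfo=ny)  # UTC-4 (EDT)
print(winter.utcoffset(), summer.utcoffset())

# Arizona does not observe daylight saving time at all:
phoenix_noon = datetime(2021, 7, 15, 12, 0, tzinfo=ZoneInfo("America/Phoenix"))
print(phoenix_noon.utcoffset())  # UTC-7 year-round
```

Even this small example shows why “just add or subtract hours from GMT” fails: the offset depends on the date, the place, and that place’s political history, all of which live in a constantly updated time zone database.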

There are times when countries skip days as they cross the International Date Line. Countries often switch time zones, sometimes several times within the same year. Some peoples can live in the same area but observe separate time zones, as with Israelis and Palestinians. Historically, we haven’t always been on the same calendar: Russia only switched to the Gregorian calendar in the twentieth century, complicating matters even more. There are even more confounding historical absurdities, such as the fact that the year used to start on the 25th of March.

There is even such a thing as a leap second, but leap seconds don’t exist in astronomical time, and the distinction matters because of how telescopes and similar instruments are built and pointed.

This video revealed just how easy it is to take on a problem without realizing how much more vastly complicated it can be, and it made me appreciate the people who came before me and worked out this absurdly complicated problem.

I found it interesting because it was a problem I had never thought about before. It seems like an easy enough problem until you find out how complicated it really is. Like I said, it is easy to take something like this for granted, and this makes me appreciate how seamlessly most technology that uses programmable clocks runs. Now that I understand how vast and complicated the problem is, I can appreciate those who came before me and did the hard work to get it right.

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.

Simplify, Simplify, Simplify

“Coding with Clarity”

Software developer Brandon Gregory’s two-part post on the blog “A List Apart” describes key concepts every software designer should follow to make the functions in our programs not only functional, but also efficient and professionally designed.

The first piece of advice Gregory shares is to maintain what he calls the “single responsibility principle”: each function should focus on only one process. By following this rule, we not only limit the margin of error in our code but also increase its flexibility. Simple functions make a program easier to read, modify, and debug.
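A tiny illustration of the idea, using hypothetical function names of my own choosing: one function that parses, validates, and formats all at once can be split so each piece has a single responsibility.

```python
# Doing too much in one place: parsing, converting, and formatting.
def describe_person(line: str) -> str:
    name, age = line.split(",")
    return f"{name.strip()} is {int(age)}"

# Split: each function now has exactly one job and can be reused or
# tested on its own.
def parse(line: str) -> tuple[str, int]:
    name, age = line.split(",")
    return name.strip(), int(age)

def format_record(name: str, age: int) -> str:
    return f"{name} is {age}"

print(format_record(*parse("Ada, 36")))  # Ada is 36
```

If the input format changes, only `parse` needs to be touched; if the display changes, only `format_record` does.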

The next concept, which Gregory calls “command-query separation,” means that a function should either carry out some work (a command) or answer a question and return some data (a query), never both. If a program needs to both change data and return information, it is better to write two separate functions to handle the two tasks.
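A minimal sketch of the separation (the `Counter` class is my own illustration, not Gregory’s):

```python
class Counter:
    def __init__(self) -> None:
        self._count = 0

    # Command: changes state, returns nothing.
    def increment(self) -> None:
        self._count += 1

    # Query: returns data, changes nothing.
    def count(self) -> int:
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.count())  # 2
```

Because `count()` has no side effects, callers can query it as often as they like without worrying about changing the program’s state.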

Finally, Gregory delves into the details of “loose coupling” and “high cohesion” in code. By “loose coupling” he means that each subsection of a program should be as independent as possible, not relying on other parts of the code to do its work. Similarly, the author advises us to stick to “high cohesion,” meaning that each object in a program should contain only the data and functions relevant to that object.
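A sketch of the difference, using a hypothetical `Report` class of my own: in the tightly coupled version the data source is hard-wired inside, while the loosely coupled version receives it from outside and keeps only formatting logic (high cohesion).

```python
# Tightly coupled: the data source is baked in, so this class can only
# ever total this one list.
class TightReport:
    def total(self) -> int:
        return sum([1, 2, 3])

# Loosely coupled: the data source is passed in, and the class contains
# only what is relevant to reporting. Any iterable of numbers will do.
class Report:
    def __init__(self, source) -> None:
        self.source = source

    def total(self) -> int:
        return sum(self.source)

print(Report([1, 2, 3]).total())  # 6
print(Report(range(5)).total())   # 10
```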

Personally, I very much agree with and appreciate Gregory’s perspective and advice on writing clean code. His advice effectively clarified a lot of the concepts I’ve had trouble implementing in my previous programs. One of my favorite lines in this post was “Large functions are where classes go to hide.” I found this quote very useful because it helps solidify the process of abstraction. From this point on, if I’m writing a function that becomes too unruly, long, or complicated, I will ask myself, “would this large function be better off as an independent class?” I definitely learned a lot about good practices for simplifying functions, and I will reflect on them from now on when developing programs.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Differences in Integration Testing


The blog post Re-Inventing Testing: What is Integration Testing? by James Bach takes an interview-like approach to exploring integration testing. It starts with the simple question “What is integration testing?” and goes from there. As the respondent answers, James adds a descriptive analysis of each answer and of what he is thinking at that point in time. The objective of the interview is to test both his own knowledge and the interviewee’s.

This was an interesting read, as it relates to a topic from a previous course that discussed coupling, the degree of interdependence between software modules, which is why I chose this blog post. What I found interesting about this post was that his chosen interviewee was a student, so the entire conversation can be viewed from a teacher-and-student perspective. This is useful because it allows me to see how a professional would like an answer to be crafted and presented in a clear manner. For example, the interviewee’s initial answer is textbook-like, which prompted James to press for details.

Some useful information about integration testing is also provided through this conversation. Integration testing happens when multiple pieces of software are combined and tested together as a group. The blog post notes that not all levels of integration are the same; sometimes “weak” forms of integration exist. An example from the blog is when one system creates a file for another system to read. There is a slight connection between the two systems because they interact with the same file, but as they are independent systems, neither knows that the other exists.
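The “weak” file-based integration the blog describes can be sketched like this, with two hypothetical functions standing in for two independent systems whose only point of contact is a shared file:

```python
import os
import tempfile

# "System A" writes a file; it has no knowledge of who will read it.
def system_a_export(path: str) -> None:
    with open(path, "w") as f:
        f.write("42\n")

# "System B" later reads that file; it has no knowledge of who wrote it.
def system_b_import(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

path = os.path.join(tempfile.mkdtemp(), "handoff.txt")
system_a_export(path)
print(system_b_import(path))  # 42
```

An integration test here would exercise exactly this handoff, since a bug in the file format would never show up in either system’s unit tests alone.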

From this blog post, I can tell that any kind of testing requires much more than textbook knowledge of the subject. As mentioned briefly in the blog, there are risks involved in integrating two independent systems, and there is a certain amount of communication between them. The amount of communication determines the level of integration: the stronger the communication between the two systems, the more they depend on one another to execute certain functions.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

B1: Levels Of Testing

          I found an interesting blog post this week that talked about levels of testing within software engineering. It explained that there are four main levels, known as Unit Testing, Integration Testing, System Testing, and Acceptance Testing. The post used a basic diagram alongside explanations to help explain each level. Unit testing is usually done by the developers of the code and involves testing small individual modules of code to make sure that each part works. Integration testing takes individual modules, as in unit testing, and combines them to see if they work together as a “section” of modules. System testing is the first level that works with the entire application; it uses multiple tests that work through application scenarios from beginning to end, and it verifies multiple requirements within the software, such as the technical, functional, and business aspects. Acceptance testing is the final level of testing, which determines whether the software can be released or not. It makes sure that the requirements set by the business are met, so that the client gets what they want.
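At the bottom of that hierarchy, a unit test is just a small, developer-written check of one module in isolation. A minimal sketch, with a hypothetical module of my own:

```python
# The small module under test:
def add(a: int, b: int) -> int:
    return a + b

# Unit test: exercises that one module by itself, with no other parts
# of the application involved.
def test_add() -> None:
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

test_add()
print("unit tests passed")
```

Integration tests would then combine several such modules, and system and acceptance tests would work with the whole application, exactly as the post’s levels describe.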

          I chose this article because I remember that the syllabus only states three main levels, excluding acceptance testing. This is what sparked my initial curiosity about what these levels of testing were and what acceptance testing was. I found this content really interesting because it explains how testing can be structured into different levels, with each level building off the last one. I enjoyed how the post explained the levels sequentially while also explaining how they interact with one another. I was able to grasp an understanding of these software testing levels while also understanding the importance and vital role that testing plays within the development process. The most interesting part of the blog post was acceptance testing, because it reminds the reader that in almost every software development scenario there are going to be changes to the requirements of the original project. This level builds off that idea and essentially allows the developers to make sure that the product they are working on meets the flexible criteria of a client. I found that the diagram didn’t make sense when I first looked at it, before reading the post. However, as I understood the subject more, I found it to be a great source that summarized and simplified the detailed ideas within the post.


From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

Positive Testing

This article explains the technique of positive testing. Positive testing is one method of testing software to make sure that it is working correctly. With this technique, a valid set of test data is used to check whether an application functions as per the specification document. For example, if you are testing a textbox that can only accept numbers, a positive test would be entering numbers into the box. This is in contrast to negative testing, which would be entering something besides numbers to see if the application breaks. The goal is to see if an application works as desired when it is being used correctly. For this reason, positive testing should be the first step of test execution.
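For the numbers-only textbox example, a positive test might look like this; the validator function is a hypothetical stand-in for the real application code:

```python
# Hypothetical validator for a textbox that accepts only digits.
def accepts(value: str) -> bool:
    return value.isdigit()

# Positive test: valid input, expected to be accepted.
assert accepts("12345")
print("positive test passed")

# (A negative test would follow later, e.g. checking that
# accepts("abc") returns False.)
```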

The article discusses two techniques used in positive testing. With boundary value analysis, a tester creates test data that falls at and around the limits of a data range. With equivalence partitioning, the test input is divided into equal partitions and values from each partition are used as test data.
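A sketch of both techniques against a hypothetical rule of my own that accepts values from 1 to 100:

```python
# Hypothetical rule under test: accept integers from 1 to 100 inclusive.
def in_range(n: int) -> bool:
    return 1 <= n <= 100

# Boundary value analysis: pick test data at and just inside the edges
# of the valid range.
boundary_cases = [1, 2, 99, 100]
assert all(in_range(n) for n in boundary_cases)

# Equivalence partitioning: split 1-100 into equal partitions and take
# one representative value from each, instead of testing at random.
partition_representatives = [10, 35, 60, 85]  # one per 25-value slice
assert all(in_range(n) for n in partition_representatives)

print("all positive tests passed")
```

The assumption behind partitioning is that values in the same slice behave alike, so one representative per slice covers the slice without redundant random inputs.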

I thought this article was very interesting because positive testing is something that I have always done as the first step in testing without realizing there is a name for it. I think it is natural to see if your application works correctly before you test for bugs by trying to break it. The part that I found most useful was the section on equivalence partitioning. I usually just try random inputs when I am testing something, but it makes a lot more sense to divide the possible inputs into equal partitions so that data from each partition can be tested.

This article will not significantly change how I work because I have always used positive testing as the first step. Although now I will make sure to use equivalence partitioning when I am testing. I am glad I read this article because now I have a good understanding of what positive testing is and I will not be confused if I run into the term in the future. Reading this article has changed the way I think about testing because now I understand that there are different testing types and techniques that should be followed correctly to make sure that a piece of software is stable and bug-free.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

The Future of Performance Analytics

In the future, data analytics will see much heavier investment due to the sheer amount of information certain companies will need to collect and maintain. This issue not only needs to be solved; its underlying problems also need to be addressed before progress can be made, which is what is currently hindering it.

This article talks about the rapid speeds needed to meet deadlines for “high demand” analytical solutions. It goes into how certain markets are investing in analytical technologies in order to predict the future and thus market services optimally. However, the article states that three main factors are a great hindrance to this push: security, privacy, and error-prone databases. Not only do these kinds of methods take time, they also need to be secure, both to protect mass amounts of data and to operate as efficiently as possible.

Upon reading this article, what interested me is that North America accounts for the largest market share due to the growing number of “players” in the region. Per the article, a lot of this investment is going toward cloud-based solutions. What I also found interesting is that this company (Market Research Future) provides research to its clients. They have many dedicated teams devoted to specific fields, which is why they can craft their research very carefully. What I find useful about this posting is that it shows just how important the future of data analytics and organization can be. With the future of data collection, there will need to be more optimized solutions to handle and control these types of research data.

The content of this posting confirms my belief that cloud computing and cloud-based data analysis will continue to grow and evolve rapidly over the coming years. With more and more companies migrating to cloud-based systems, not only for internal needs but for client needs as well, we will see a great push toward optimized data sorting and faster data transfer. Expansion in cloud computing and web-based services will become the main staple of future products such as this.


From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Strategies on Securing your Software

“Hacker, Hack Thyself”

Stack Overflow co-founder and author of the blog “Coding Horror,” Jeff Atwood, writes in this post about his experience securing his open source project Discourse against security threats. Atwood discusses the hashing algorithms they use to protect their database and users’ data, as well as the strategies they used to test the strength of their cybersecurity and password policies.

To test their designs, the developers attempted to hack into their own software and tracked the estimated time it took their systems to crack passwords of varying lengths. They did this by creating various passwords on the servers, starting from the simplest allowable strings of digits, increasing the length of the passwords, and moving on to more complex passwords with words and numbers combined. What they found was that passwords combining case-sensitive letters and digits could take up to three years to crack.
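Part of the gap between simple and complex passwords comes down to search-space size. A back-of-the-envelope calculation, my own illustration rather than Atwood’s numbers:

```python
# Brute-force search space for an 8-character password, by charset size:
digits_only = 10 ** 8   # digits 0-9 only
mixed = 62 ** 8         # lowercase + uppercase + digits (26 + 26 + 10)

# Mixing cases and digits multiplies the attacker's work enormously:
print(mixed // digits_only)  # about 2.2 million times more guesses
```

Every extra character or charset class multiplies, rather than adds to, the number of guesses an attacker must make, which is why the crack times Atwood measured diverge so sharply.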

By cracking the hash functions of these passwords and recording the amount of time it took to do so, the developers had meaningful data that informed them of their software’s resilience to security threats, and that presumably would have a significant effect on their password policies and the development of future hash algorithms, if needed.

I found Atwood’s post both interesting and informative. It was interesting to see the strategies the developers used to protect their database from what Atwood describes as “a brute force try-every-single-letter-and-number attack.” Still, I was surprised to see how large the difference was between the time it took them to crack the simple passwords and the complex ones.

On the technical side of things, I scratched the surface of a lot of important concepts in this post that I would love to learn more about. For instance, Atwood goes into some detail about the proper complexity and number of iterations that should go into a solid hash function. That type of knowledge is extremely valuable in developing secure programs.
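Python’s standard library exposes this kind of iterated (key-stretching) hashing via PBKDF2. A minimal sketch; the password, salt, and iteration count below are illustrative, not the values Discourse actually uses:

```python
import hashlib
import os

# A random per-user salt stops precomputed (rainbow-table) attacks.
salt = os.urandom(16)

# PBKDF2 runs the underlying hash many times (here 100,000 iterations),
# so each brute-force guess costs the attacker that much extra compute.
digest = hashlib.pbkdf2_hmac(
    "sha256", b"correct horse battery staple", salt, 100_000
)
print(len(digest))  # 32-byte derived key for SHA-256
```

Raising the iteration count is the knob Atwood describes: it slows a legitimate login imperceptibly while multiplying the cost of every attacker guess.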

Atwood concludes his post expressing a better understanding of specifically which types of attacks his software is strong against and which it is vulnerable to. I definitely agree with Atwood’s proactive philosophy about cybersecurity, and I believe that kind of reasoning is instrumental to being a successful software developer.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.