
Sprint #2 Retrospective

GitLab Deliverables:

  • Issue Board Maintenance => Throughout Sprint #2, I was responsible for creating and tracking tasks within our group. Additionally, I reviewed the syllabus and noticed that we had missed a Backlog Refinement session during our first Sprint, so I allocated two dates (one for each remaining Sprint) for Backlog Refinement sessions. I led the discussion in reshaping our scope and adding/removing tasks in accordance with that scope.
  • Secrets/Docker Compose.yaml Research => This task grew out of a component of Docker Compose that I researched very late in our first Sprint. I completed my research and wrote documentation on the feature once I was able to implement it in a test Docker container. The main goal of implementing this was to securely hide the certificates we’d use (and reference) in our Docker Compose file. Unfortunately, we pivoted away from the self-signed certificate in favor of using a domain, which completely discontinued the use of this research. A minimal sketch of how Compose secrets are wired up appears after the documentation link below.
    • Link to Documentation:

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/documentation/-/blob/main/Documentation/Secrets.md?ref_type=heads
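
For reference, here is a minimal sketch of how Compose secrets can be wired up. The service name, image, and file paths are hypothetical placeholders, not our actual deployment:

    services:
      frontend:
        image: nginx:alpine            # hypothetical service that needs the certificate
        secrets:
          - site_cert
          - site_key
        # Compose mounts each secret at /run/secrets/<name> inside the container

    secrets:
      site_cert:
        file: ./certs/site.crt         # local file kept out of version control
      site_key:
        file: ./certs/site.key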

Compared to Sprint #1, Sprint #2 felt like it produced much more substantial progress. During the beginning of the Sprint, we pivoted our approach from using a self-signed certificate to using a domain to create a secure connection. With this newly acquired domain, we obtained a certificate that allowed us to securely connect to the FrontEnd’s services. I was unfamiliar with the process of applying for the certificate and renewing it at the time, but our team discussed how this would work, which helped me learn new aspects of networking. Just before we acquired our domain, I finished my research on Docker Compose Secrets, which at the time I thought would work perfectly with our self-signed approach. Ultimately this work will go unused, as we soon ditched the self-signed certificates in favor of using the domain. I decided to keep the documentation, as future teams may need this information for this project or to learn more about Docker Compose.

Beyond that set of documentation, I would say my most successful contribution this Sprint was setting up the reverse proxy to the FrontEnd service. On the surface, the reverse proxy took only about three lines of configuration; however, I spent a couple of hours learning what a reverse proxy was and which ports were being occupied by the GuestInfoSystem. Looking back at the time I spent implementing this, I wish I had made two specific changes to my approach. First, I wish I had recorded which ports were occupied. Within my first 30 minutes of testing the reverse proxy, I had a running FrontEnd connected to our domain; at the time I didn’t think this was the desired result, so I spent another hour or two checking what other ports were occupied (and which services they led to). Recording the port information would not only have saved me time but could also have given the team a visual illustrating which port to access for each specific service. The second change is that I wish I had created documentation. Looking back at the time I spent learning what reverse proxies were, I wish I had set aside a small list of my findings. Once I had the reverse proxy working, I gave my team a quick walkthrough, but future teams might need this information. To resolve this, I plan on adding a section to the Onboarding documentation, which I will write during Sprint #3, discussing the reverse proxy I created and how to find/change it.
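
To give a sense of scale, a reverse proxy route of roughly that size might look like the following nginx snippet. The domain and port here are placeholders, not our actual configuration:

    server {
        listen 443 ssl;
        server_name pantry.example.org;           # placeholder domain

        location / {
            proxy_pass http://localhost:8080;     # placeholder FrontEnd port
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }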

I’m continuing to learn and apply the apprenticeship pattern “Be the Worst” from Sprint #1, but I’ve begun to focus on another pattern, “Create Feedback Loops”. One thing that I pride our team on is communication among peers and shared knowledge. If we find something new, such as a new approach to solving a problem or an interesting bit of research, we do our best to share it when we meet at the beginning of class. Contributing to our group’s wealth of knowledge creates a feedback loop that not only encourages other team members to find new information but also reinforces what each of us has learned individually. This apprenticeship pattern encourages the developer to pick a metric and see how much of it they, individually, provide to their team compared to other team members. Our feedback loop helps keep me on my toes: when I don’t feel like I’ve contributed to our knowledge (or, by extension, our progress), I develop an internal drive to take on something outside my comfort zone. During this Sprint, I chose to learn about reverse proxies to contribute to this feedback loop. Not only would my findings be applied, but I could then further encourage and enable the work of my peers.

The last thing I would like to note is our team’s performance over this Sprint. I have learned a lot by tackling the issue of connecting our FrontEnd service as a group, but moving into Sprint #3 I want to see how much more effective I can become at tackling individual issues. During our class meetings, we typically focus on one goal and research it together. I’ve learned that I have a hard time keeping up when new information and approaches are being tested; the best way I can word this struggle is the sense of there being ‘too many cooks in the kitchen’. Moving into Sprint #3, I want to refocus my effort on my personal work. In doing so, I hope I can become a much more effective team member.

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Sprint #1: Retrospective

GitLab Deliverables:

Sprint #1 was a rocky path, but through our group’s collective efforts, we all learned new tools and skills we can apply in future sprints. At the beginning of this sprint, our group agreed that I would be the Scrum Master and, by extension, manage our GitLab activity. In hindsight, this was a perfect designation, as I’ve had managerial experience in the past and continue to be a very organizationally driven worker. With these traits, I sought to make our workspace as clear and accommodating as possible, which would provide a strong foundation for our team to begin working. One area I could have improved upon was making sure everyone’s voice was heard. During Sprint #1, my confidence with Linux started at a 2/10, so over the next couple of weeks, I had to refresh my memory while learning new skills. Due to this lack of confidence, I was much more reserved during group conversations, as I was trying to soak in as much information as possible. Consequently, I did not explicitly check whether everyone felt their voice was being heard. Towards the end of the sprint, my confidence with Linux grew greatly. As a result, I was able to participate in group discussions and ensure that everyone was on the same page.

As a team, our biggest developmental challenge was understanding when problems should be taken on individually or as a group. At the beginning of Sprint #1, my bias weighed heavily towards solving problems as a team, as it would grant everyone equal opportunity to learn from hands-on experience. As we soon found out, this approach is costly in time and does not let people engage with topics that specifically interest them. The task that gave us the most hassle was getting the FrontEnd certified so that the Backend could connect with it. This task was our “group task”, which we used our working classes to try to resolve. Beyond this “group task”, we each had individual tasks we would look into. This approach of distributing work across teammates and holding each other accountable for learning unique material has been effective so far. After someone learns a new tool or skill, I request that they create documentation listing the steps or sources they referred to, so that all team members have access to what that individual learned. So far, there have been no issues with this approach, and we will continue to refine it in Sprint #2.

As previously mentioned in my review of Apprenticeship Patterns, the pattern that resonated with me the most this sprint was “Be the Worst”. This self-assessment pattern has allowed me to refresh my knowledge of Linux through completing tasks such as organizing and documenting our completed work. The pattern has the individual actively listen and learn as their teammates discuss current issues. From this, the individual can draw on the shared knowledge of the group and slowly catch up to its level of proficiency. In the case of Sprint #1, my Linux skills were beyond rusty, making me the least proficient in the group. Although I was not able to start helping on the server immediately, I was able to help record our steps. By doing so, I could educate myself on how we approached problems as they arose. Towards the end of Sprint #1, I found my confidence in using Linux and began contributing to the server.

Moving forward, I plan on interacting with the server more, and I have already begun that process by researching encrypted volumes. Additionally, I will continue to refine my skills as our group’s Scrum Master. Now that I am nearly as proficient as the other members of my team, I can focus on making sure we all understand our current goals and that everyone’s voice is heard. In terms of work distribution, we struck a fair balance by the end of Sprint #1. If any issue arises in Sprint #2, I will reconsider how we organize and assign tasks. Fortunately, we now have assigned days for backlog refinement, so we can discuss what changes we would like to make during those periods.

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Apprenticeship Pattern Reflection

After reading Apprenticeship Patterns, I was left perplexed by some of the topics this book covered. To preface, I think that the authors, Dave Hoover and Adewale Oshineye, did a great job addressing how software development demands that individuals retain a ‘growth mindset’ throughout their studies and practice. My largest gripe is with how they frame individual progression. Hoover and Oshineye use an apprenticeship model to describe one’s progression. For starters, I believe that mastery does not exist, and that the idea conflicts with the content covered in Chapter 2. The authors define mastery as “performing all the roles of an apprentice or a journeyman as well as focusing on moving the industry forward.” Fundamentally, this would be a fitting definition, as apprentices focus on learning and journeymen focus on applying their knowledge. In practice, it can be difficult to find a balance between these two roles. Furthermore, masters are ‘pillars of knowledge’, as apprentices and journeymen refer to them for their experience. In this scenario, if the master has not found a balance between learning and practicing, then they may give ineffective or incorrect information. My solution to this dilemma is the erasure of mastery. In a field as ever-changing as software development, mastery is nigh impossible to achieve. Hence, we must all dedicate ourselves to improving and to finding a balance between apprentice and journeyman responsibilities.

Despite my personal issues with their framework, the content they covered in later chapters resonated with me. Chapters 3 and 4, Walking the Long Road and Accurate Self-Assessment respectively, shared the ideas I agreed with most. Chapter 3’s driving point is that we are all on the same path, and that skills you see in others can eventually be learned by you. A year ago I found myself in a group of students learning software development. We were learning how to write tests in Java, which I had experience with from another course. In this group, we all knew the fundamentals of testing, but each of us had different skills outside of testing that we indirectly taught each other over the semester. Naturally, I was intimidated by the amount of knowledge my group members had that I lacked at the time. As the semester came to a close, that knowledge gap slowly shrank as I found myself learning from how they approached these problems. Chapter 4’s driving point is not to settle for your inadequacies, but to grow because of them. In this chapter, the authors suggest surrounding yourself with people who understand the material you’re trying to learn. Linux commands and Docker are among the subjects I know least, but I chose them as my focus for this project to further develop my knowledge. Fortunately, I have been placed in a team where we all have varying degrees of skill with these subjects. I have been taking on tasks involving team organization and documentation, as writing these instructions and understanding our workflow have slowly begun to fill the gaps in my knowledge.

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #4: Software Frameworks

As I become a more knowledgeable developer, I hear the term ‘software framework’ used more frequently. In my research, I’ve seen increased interest in frameworks such as Angular and React. To understand what these are, I must first define what a software framework is.

In the article What is a Framework? Software Frameworks Definition, Joel Olawanle walks through a general definition of software frameworks and comments on how they may be applied and how they differ. Olawanle defines a framework as “…a structure that you can use to build software. It acts as a foundation so you don’t have to deal with creating unnecessary extra logic from scratch” (Olawanle). Supposing a framework is implemented correctly, developers will save time, allowing them to start the project earlier. Additionally, the foundation it provides will not be prone to human error. If the components of a framework were implemented from scratch, there could be errors that would be much more difficult to fix further into development. Since frameworks can be modified, there is little reason to reimplement their functionality from scratch.

Olawanle extends his definition to other areas of software development. Before reading this article, I understood that there were both frontend and backend frameworks, but I was surprised to learn that mobile applications and data science have their own respective frameworks. While reading Olawanle’s article, I noticed a framework I used a few years ago: Bootstrap. It qualifies as a software framework because it gathers the files needed for a functioning website (.html, .css, and .js) into one structure. This allows developers to easily build their websites without having to create this structure themselves. Angular and React are both classified as frontend frameworks. Both are used for creating interfaces for websites, but each has its unique components: React can use JavaScript to generate HTML and CSS, while Angular has dependency injection, allowing it to communicate more freely with other applications. Each of these gives the framework a specialized purpose.
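
As a small illustration of the React half of that comparison, here is a generic sketch (not code from any project mentioned here) of a component where JavaScript generates the HTML:

    import { createRoot } from 'react-dom/client';

    // A minimal React component: JavaScript (JSX) that renders HTML
    function Greeting({ name }) {
      return <h1>Hello, {name}!</h1>;
    }

    // Mount the component into the page (React 18 API)
    createRoot(document.getElementById('root')).render(<Greeting name="reader" />);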

Depending on where a developer is working (frontend, backend, etc.), there will be a framework that can create a strong foundation for their code. Before selecting a framework, the developer must weigh the tradeoffs between the various options and consider which would best support their project’s principles. Using a framework in this scenario saves time and reduces the human error that implementing these components manually would introduce. In my experience with Thea’s Pantry, the backend does not use any framework listed in Olawanle’s article. After reviewing the documentation, it seems the backend would not support the listed frameworks, as they are not directly language-compatible. This means more time and resources would be invested in adopting a framework than in implementing the needed components individually. Frameworks provide a strong starting point for development teams, but depending on a project’s design choices, they may be incompatible with it.

Link to Article:

https://www.freecodecamp.org/news/what-is-a-framework-software-frameworks-definition/

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #3: Software Documentation

Communication with those who help build a project and those who will use it is an essential component of any project. Software documentation serves both of these ends, as it informs anyone approaching the project about topics such as the requirements needed to run the software, instructions for using it, and myriad other facets of the project. The Swimm Team, in their article What is Software Documentation? Types, tools, and best practices, lists two types of documentation: external and internal.

External documentation seeks to explain all aspects of the software that are not locally accessed. Most of this documentation involves user interaction. This is seen through end-user documentation, which provides the user with instructions for using the software, or through just-in-time documentation, which guides the user while the program executes. A more technical piece of documentation, API documentation, is also considered external. Although most users may not interact with the API directly, this type of documentation is targeted at developers who may want to extend the API’s functionality or use it in their own projects. Because external functionality is so accessible, its documentation must be equally accessible to its audience. A level of abstraction must be provided to reach this goal, as uninformed users do not need to understand how the internal systems of the software function; rather, they must learn how to interact with it.

Internal documentation refers to everything ‘behind the scenes’, so to speak. Contrary to external documentation, this documentation is less publicly accessible, meaning it can go into detail on how the systems of the software work. Because of this depth, it serves as a great introduction to the project and can be used to onboard new developers. Internal documentation ranges from information about the development cycle of the software, as seen in scheduling documentation, to the design choices seen throughout, which can be found in software design documents. During the development process, software engineers can refer to internal documentation to ensure their contributions follow the team’s vision for the software.

My experience with software documentation was limited up until this year. My most recent experience with documentation was through the HFOSS project Thea’s Pantry. Within this project, there is API documentation, available through a .yaml file, which lists all endpoints and the code associated with each API call. Additionally, there is indirect documentation offered through activities that introduce onboarding developers to the project. Each of these activities introduces a broad topic, such as software architecture, and then transitions into how it applies to Thea’s Pantry. This unconventional form of documentation lets onboarding developers interact with the components in a microcosm before applying their knowledge to the project. Documentation is a vital component of software development. Without supporting texts such as these, clients interested in the project would find it inaccessible, and onboarding (as well as current) developers might not understand the project as a whole, which could jeopardize its production.
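
To give a sense of what that kind of .yaml API documentation looks like, here is a generic OpenAPI-style fragment; the endpoint and fields are illustrative, not copied from Thea’s Pantry:

    paths:
      /guests/{id}:
        get:
          summary: Retrieve one guest record
          parameters:
            - name: id
              in: path
              required: true
              schema:
                type: string
          responses:
            '200':
              description: The requested guest record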

Link to Article:

https://swimm.io/learn/software-documentation/what-is-software-documentation-types-tools-and-best-practices

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #2: Anti-Patterns – Explained

Anti-patterns are best described as behaviors or approaches that conceptually may help solve a problem, but in practice are a detriment to the process of doing so. In software development, this can take many forms, whether ‘cutting corners’ by reusing old code or trying to condense behaviors into one class/object. Ultimately, these decisions we make as developers come from a place of genuine good intention. But when these patterns remain unchecked, they begin to rot in our code and cause many problems, some of which contradict the very reasons they were originally incorporated.

In the article Anti patterns in software development, the author Christoph Nißle describes several anti-patterns that occur in software development and the consequences of each. Three anti-patterns resonated most with me, as I could see how someone might accidentally implement each of them. The first is what Nißle calls the Boat Anchor. It represents code that *could* be used eventually but, for the time being, has no relevance to the current program. By keeping this code, the developer contributes to visual bloat. Not only does this make finding specific lines harder, but once other developers join the project, they may have questions about how this code will be used. To counter this anti-pattern, it’s good practice to keep only code that is relevant to the program’s functionality AND is currently being used by the program. The second anti-pattern I found interesting was Cut-and-Paste Programming. As the title suggests, it occurs when programmers reuse code from external sources without properly adapting it to their current project; the copied code can also come from the same program. Under both circumstances this code will cause errors, as it’s not a ‘one size fits all’ solution; furthermore, the code being pasted could itself have errors. These errors can be remedied by “creating multiple unique fixes for the same problem in multiple places” (Nißle), but each unique fix requires time, and that time could have been spent writing code for the specific problem rather than reusing code. Lastly, the Blob is a pattern that I have personally fallen victim to several times. Here the developer tries to make objects/classes as dense with functionality as possible, but this complexity acts against the single responsibility principle. Classes (and objects) should be responsible for one behavior; if we include too many, the purpose of that specific class becomes unclear. The Blob can be fixed by dissolving the blob class into several single-responsibility classes, as shown in the sketch below. It’s best to catch poor practices such as the Blob early in development to minimize the amount of refactoring needed to fix the code.
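
As a hypothetical JavaScript illustration of dissolving a Blob, consider a class that mixes validation, storage, and display being split along its responsibilities (the class names are invented for this sketch):

    // Blob: one class doing validation, storage, and display all at once
    class GuestBlob {
      constructor() { this.guests = []; }
      add(guest) { /* validation + storage + UI refresh all crammed in here */ }
    }

    // Refactored: each class has a single responsibility
    class GuestValidator {
      isValid(guest) {
        return typeof guest.name === 'string' && guest.name.length > 0;
      }
    }

    class GuestStore {
      constructor() { this.guests = []; }
      add(guest) { this.guests.push(guest); }
    }

    class GuestView {
      render(guests) {
        guests.forEach((g) => console.log(`Guest: ${g.name}`));
      }
    }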

As mentioned before, I’ve fallen victim to these anti-patterns because conceptually they save time in the development process. However, the time saved is often eclipsed by the time required to fix errors later in development. Properly following design principles makes development take longer up front, but it should reduce the number of errors that would appear if anti-patterns were used in their place.

Link to Article:

https://medium.com/@christophnissle/anti-patterns-in-software-development-c51957867f27

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #1: Introduction to APIs

In our work with REST APIs, namely through the HFOSS project Thea’s Pantry, we have added new functionality to the database by updating the API specification and creating new endpoints. During this whole process, I did not have a concrete idea of what an API was, nor did I understand what made REST APIs any different from their alternatives.

In the article What is a RESTful API, the authors Stephen Bigelow and Alexander Gillis define what an API is, what components make an API RESTful, and how such APIs can be used. An API is “code that lets two software programs communicate with one another” (Bigelow & Gillis). This can be seen in our work on Thea’s Pantry, where the specification.yaml file provides instructions for the commands that communicate between the backend and the database. In a general flow of control, the user interacts with a piece of software, that software interacts with the API, and the API then passes control to the external software. From this point, the user can directly affect the external piece of software (in the case of methods such as DELETE and PUT), or fetch information from it to be returned to their client-side software. REST stands for representational state transfer, a style of software architecture that makes communication between two programs more accessible and easier to implement (Bigelow & Gillis). Users can interact with resources from another program using HTTP requests composed of a method, an endpoint, headers, and sometimes a body. RESTful commands, similar to those of databases (get, update, delete, etc.), can be given unique functionality by the developers of the API. This modularity of command functions is one of the benefits of using RESTful APIs. An alternative to RESTful APIs is SOAP. The two achieve similar functionality, but their methods of doing so differ. For example, SOAP is a communication protocol, whereas REST is an architectural style. SOAP messages must be XML, while REST can be used with XML in addition to other formats. It is worth noting that REST and SOAP are not one-to-one alternatives and can be used together.
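
For instance, a RESTful request assembled from those parts might look like this in JavaScript; the URL, method, and fields are hypothetical:

    // A hypothetical REST call: method, endpoint, headers, and body
    const response = await fetch('https://api.example.org/guests/1234', {
      method: 'PUT',                                    // method
      headers: { 'Content-Type': 'application/json' },  // header
      body: JSON.stringify({ name: 'Jane Doe' }),       // body
    });

    const updatedGuest = await response.json();         // reply parsed from JSON
    console.log(updatedGuest);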

APIs allow developers to extend the functionality of their programs by communicating with other programs. This can be achieved through HTTP requests (in the case of RESTful APIs) or XML messages (in the case of SOAP APIs). REST APIs favor flexibility and modularity; SOAP APIs, on the other hand, are more rigid and require precise specifications. Due to their accessibility, RESTful APIs are more favorable for projects such as Thea’s Pantry. I cannot see SOAP being implemented in Thea’s Pantry, given the rigidity reflected in the message format it requires. REST is much preferred here, as we can use JavaScript files to define the HTTP requests that the API will use.

Link to Article:

https://www.techtarget.com/searchapparchitecture/definition/RESTful-API

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Introductory Blog #3

Hello to all readers new and old, and thank you for taking the time to read this brief introduction to my blog. My name is Andrew George, and I am a senior at Worcester State University. For the past couple of years I’ve been studying Computer Science and slowly adding skills to my repertoire. I am once again looking forward to researching topics pertaining to my major, this time with a focus on the software design process. If you have not already, I encourage you to read my previous introduction pieces on this blog, as those have a more in-depth explanation of where I started and how I got to this point today. Thank you once again, and until next time!

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #8: Intro to Security Testing

Throughout my cumulative experience with testing, most of my focus has been on a program’s logic and ensuring that it yields correct results. One aspect of testing I have no experience with is security testing. Here one must find flaws in a system or program’s security and report them to developers so they cannot be exploited later in the product’s lifespan. Security testing has much higher stakes than unit testing, as vital information such as consumers’ personal information and system source code may be leaked if there is a security breach. Therefore, testing security is of the utmost importance when releasing a service to the public, as failing to do so will damage the service’s integrity.

Security testing encompasses several different types of tests, each of which focuses on different aspects of a system. The article Security Testing, posted by user pp_pankaj, highlights the principles upheld by this kind of testing and what each test achieves. One of these tests, Posture Assessment, I found quite interesting. Posture Assessment combines the testing methods of ethical hacking, risk assessment, and security scanning into one report to provide the overall security posture of a system (pp_pankaj). Each of these subtests has the shared goal of having a hacker, hired by the development team, find security vulnerabilities within the system and report them to the team. Another form of testing I found interesting was social engineering testing. This deviates drastically from what we as programmers have come to understand tests to be. These are emulated attacks through communication channels such as email. The purpose of this test is to train developers to avoid suspicious engagement and to find new ways a system might be breached without direct contact. Whether a development team gets successfully breached through a socially engineered test depends on the team’s understanding of whom they should respond to. A few weeks ago I was researching a data breach that happened earlier this year at Microsoft. Hackers were able to take control of a testing account and gain direct access to employees on the project. From there, they were able to obtain information they naturally should not have had access to. All of this occurred because developers did not know that they must not communicate with a testing account.

A general metric for whether security testing is vital to a project is to consider whether your product is responsible for holding personal information that is not your own. If this is the case, then it’s in the development team’s best interest to uphold the principles of confidentiality and integrity by running security tests throughout the product’s lifespan.

-AG

Source: https://www.geeksforgeeks.org/security-testing/

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #7: Intro to Combinatorial Testing

Beyond unit testing, there are several ways one may go about testing a developed system. One of these is combinatorial testing. While researching this topic, I recognized a couple of steps I’ve taken in my own testing that match this method. For example, earlier this semester I learned how behavior tables can help guide unit testing by showing what aspects of a program will be covered by one test. Combinatorial testing achieves a similar effect by first taking all possible inputs (from a pool of predetermined inputs) and then creating a set of tests that covers each unique combination from the pool. A source that helped me grasp this topic is Combinatorial Testing by Shanika Wickramasinghe. In this article, Wickramasinghe provides an example of how they would develop tests. It’s important to note that in this example only combinations are created, NOT permutations, meaning that overall far fewer tests are needed to fulfill a combinatorial test. This raises a question for future reading: whether there is such a thing as ‘permutative’ testing, and how it would differ from combinatorial testing.
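
To make the combination counting concrete, here is a small JavaScript sketch with made-up parameter pools. It enumerates every combination of one value per pool, which is what a full combinatorial test covers:

    // Hypothetical input pools for three parameters
    const pools = {
      browser: ['Firefox', 'Chrome'],
      os: ['Linux', 'Windows'],
      userType: ['guest', 'admin'],
    };

    // Build every combination: one value chosen from each pool
    function combinations(poolMap) {
      return Object.entries(poolMap).reduce(
        (acc, [key, values]) =>
          acc.flatMap((combo) => values.map((v) => ({ ...combo, [key]: v }))),
        [{}]
      );
    }

    const tests = combinations(pools);
    console.log(tests.length); // 2 * 2 * 2 = 8 test cases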

Using combinatorial testing does provide benefits, despite the time it may take to achieve a successful test. Combinatorial tests are designed to try multiple inputs simultaneously, meaning that both single-fault and multi-fault assumptions are exercised in a full combinatorial test. Once these tests are complete, the developer can better understand which inputs cause a problem within their code. Additionally, once the pool of potential inputs is determined, the tester has a set number of tests they must conduct. These tests may find faults in the program triggered by specific inputs that the development team had not accounted for. Through feedback such as this, the development team can resolve the bug and create ways of handling errors caused by unexpected input. These benefits come with equally heavy drawbacks. Manual combinatorial testing is possible; however, testers may struggle to create combinations from a larger input pool. This can be solved by using an automated combinatorial tester, though that is limited in turn by how intensive the tests are on the hardware running them. Lastly, some combinations the test generates may be so improbable that testing them is nonsensical. This becomes an issue of resources, which will vary from developer to developer. Ultimately, whether one uses combinatorial testing is up to the developer. There are instances where the cost of conducting these tests benefits the development process, but this is not a ‘one size fits all’ type of test. By using some of the team’s resources, whether labor or hardware, combinatorial tests will yield meaningful results as to which areas of the program need further testing.

-AG

Source: https://testsigma.com/blog/combinatorial-testing/

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.