Category Archives: Week 5

Data Redundancy – Relevance in Software Systems and Websites

In today’s world, businesses, organizations, and other entities that software and web developers consider “clients” heavily rely on being able to efficiently collect, access, and otherwise manage data for their day-to-day operations. For many, losing access to a database, or suffering a similar outage, hinders their ability to continue operating. In “Data Redundancy: Meaning and Importance,” author Charlotte White discusses data redundancy and some basic strategies and implementations that address these vulnerabilities.

Data redundancy goes beyond simply having backups of existing data (although those are an important component); it is a proactive plan to prevent data loss and maintain smooth operations in the case of a server shutdown, hardware malfunction, or other major disruption. It is crucial for ensuring the continuity of business operations, as website downtime often leads to financial losses, especially for new websites or those with low traffic. Outages can also hurt search engine rankings, since uptime is a factor commonly considered by search algorithms. Furthermore, data loss can cascade into crashes and issues in other systems and cost an organization customer information, business details, and other critical or confidential information essential to its success and reputation.

How Redundancy Works: Effective redundancy designs reduce dependency on any single copy of data or any single data center. They commonly implement the 3-2-1 rule of backups: keep three copies of the data on two different types of storage, with one copy kept offsite. Redundancy strategies should also consider hardware: many servers store data on hard disk drives (HDDs), which can fail through simple wear and tear. Some hosting companies use RAID (Redundant Array of Independent Disks) and Unraid solutions to mirror data from HDDs to other storage devices, minimizing the impact of HDD failures.

Recently in CS343, we’ve been looking at software architectures and strategies for organizing systems that could realistically be implemented to address clients’ needs. In particular, we’ve been comparing the strengths and weaknesses of a simpler architecture such as the monolith against a more complex architecture such as the microservices model, with its several intercommunicating systems.

Most of the scenarios we discussed involved the ease of pushing out updates, but I was left wondering about the repercussions of a database or system going totally offline, and about ways to manage that possibility. For businesses involved in eCommerce, uptime is money, both in terms of sales and in maintaining search engine rankings. Given how damaging such a disruption could be, data redundancy plans are an important consideration when planning and setting up a website or system. Understanding the value of data redundancy and how it is implemented is an asset in planning and designing software systems and projects, and generally beneficial for computer science students and professionals.

Source:
1. Data Redundancy Meaning and Importance: A Complete Guide | ResellerClub India Blog

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Week of October 9, 2023

https://www.atlassian.com/microservices/microservices-architecture/microservices-vs-monolith

Since learning about different software architecture styles like the monolithic architecture, the client-server architecture, and the microservices architecture, I’ve been curious about how large-scale applications transition from one architecture to another as a project grows in scale. I found this blog post on the Atlassian website breaking down the differences between the monolithic architecture and the microservices architecture, as well as telling the story of Netflix’s innovative migration from a monolith to microservices.

The article begins with the example of Netflix’s transition between architectures. Netflix was growing rapidly by 2009 and needed to expand its software infrastructure to meet massive demand. Before “microservices” was a term in wide usage, Netflix was one of the first major companies to migrate to a microservices architecture, and in 2015 it earned a JAX Special Jury award for its successful deployment. Netflix’s new architecture would model itself on DevOps, defined by Amazon as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity” (https://aws.amazon.com/devops/what-is-devops/).

Following the story of Netflix’s change in infrastructure, the article continues with an explanation of the monolithic architecture style. The monolithic architecture is a traditional design in which the entire application is housed in a single, self-contained unit. This architecture is simple to understand and easy to use as a foundation for an application. The major drawback of the monolith, however, is the difficulty of updating the application: making changes to the code base requires redeploying the entire application, typically taking the whole service offline. Monolithic architectures also scale poorly as the application grows.

The microservices architecture addresses some of the disadvantages that come with a monolithic architecture. The application is divided into independent services, each with its own database and methods. With this model, only the components that require changes need to be taken down, leaving the rest of the application free to continue working.

One inherent barrier to using the microservices architecture is the expense of multiple machines to host the different microservices, as well as storage space for their accompanying databases. It may only be beneficial for an application to transition to a microservices model once it has reached a certain scale. Small to mid-size applications may be perfectly well served by monolith architectures for much less cost than hosting the application across a microservices architecture.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

Software Development Methodologies

The blog I have chosen to write about this week is an article called “12 Best Software Development Methodologies” by Intellectsoft, which I used to research and learn more about the different types of software development methodologies. The reason I chose this article in particular is how in depth it goes into each methodology, its pros and cons, and when to choose it. One point the article makes that I found very interesting is how, as software becomes more advanced over time, companies spend more and more on researching and improving past development methods.

In class this week we have gone over the steps in software development as well as the waterfall and agile methodologies. Reflecting on the waterfall technique, it does not seem very practical, since changes are not possible without restarting the entire development process; however, the article states that there are relatively few financial risks “due to the high planning accuracy,” and every step has a given deadline. On the other hand, delivery can take a long time if everyone working on the project is not on the same page. This method is not suitable for larger or ongoing projects.

The agile development method focuses on the project/product itself. In class, we watched a video on Agile covering its set of values, which included “Individuals and interactions over processes and tools” as well as “Responding to change over following a plan.” Because Agile is a very flexible, on-the-go methodology, it risks insufficient budget predictability. The article states that this method “fits young companies … open for communication,” which provides top-of-the-line product quality.

A method I found interesting that was not talked about in class is the Spiral Development Model. This method seems to be a hybrid of both waterfall and agile. Like the waterfall method, it is done in phases and places an emphasis on risk management. Like the agile method, it incorporates client collaboration throughout the process in increments. It appears to have the best of both worlds, though it is not suitable for smaller projects.

When reflecting on software development processes, there are many factors to consider when choosing a methodology, including cost, project size, risk tolerance, client input, and so forth. For smaller projects where I collaborate with a team, I may consider using the agile methodology or lean development, both of which incorporate feedback throughout the process and offer flexibility.

Link: https://www.intellectsoft.net/blog/top-12-software-development-methodologies-you-should-know/

From the blog CS@Worcester – Anthony Duong CS Blog by anthony duong and used with permission of the author. All other rights reserved by the author.

Software Development Methodologies

A software development methodology is the set of methods and workflow techniques used to design IT software solutions. As I’ve been learning, there are many different types of these methodologies. According to Synopsys, the four most popular are the Agile development methodology, the DevOps deployment methodology, the Waterfall development method, and Rapid application development. In my class, I’ve been learning about the Agile development methodology and the Waterfall development method. I think it is really interesting how there are so many different methods of developing software that all lead to the same outcome, and I’m interested in learning more about the others. Learning these different methodologies will be very helpful, as some are more effective in certain situations. The Synopsys article has really helped me because it covers two of the methods I already know, letting me get a better understanding of them, while also introducing a couple of new ones.

The first methodology I learned about, the Agile development method, involves repeating all of the steps in short increments. The main reason this method is used is to help minimize risk when adding new functions to the software; these risks include bugs, changing requirements, and cost overruns. Like all of the methods, Agile has both pros and cons. One pro is that the software is released in multiple iterations, not just one, which helps developers fix bugs and change things early in development. Another is that users see the benefit of the software earlier, rather than waiting for the entire product to be implemented. However, there are also cons. One major con is that new team members are often behind and unable to get up to speed, since Agile relies on real-time communication and they don’t have the documentation they need. Another is that Agile may not be as helpful in large organizations that are used to other methods, such as the waterfall method.

The second method I learned about is known as the Waterfall method. Unlike the Agile method, the Waterfall method performs each step in order and doesn’t move on to the next step until the previous one is completed. On top of that, each step is performed only once. The Waterfall method consists of sequential phases, each with its own goals: Requirements, Design, Implementation, Verification, Deployment, and Maintenance. This method is also known as the traditional method, as many different companies use it. A major pro of this method is that it is easy to understand and manage, since only one step with one goal is in progress at a specific time. This allows less experienced teams and managers to understand and benefit from it. A major con is that it can be very slow and also very costly, because of the way it is structured and its tight controls. I haven’t learned about the other two methods on Synopsys, the DevOps deployment methodology and Rapid application development, but I think this website is a great source for learning more about them, and I’m excited to do more research on these topics.

Synopsys: https://www.synopsys.com/blogs/software-security/top-4-software-development-methodologies.html

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Exploring Git

Addressing Merge Conflicts on GitHub

In the fast-paced world of software development, collaboration is the lifeblood that fuels innovation. One of the most powerful platforms for collaborative coding is GitHub, a platform that has revolutionized the way developers work together on projects. However, this collaborative utopia is not without its challenges, and one of the most notorious hurdles developers face is the dreaded “merge conflict.” To shed light on this issue, I recently explored the blog post titled “How to Resolve Merge Conflicts in GitHub” on HubSpot’s website. In this blog post, I will share my insights and lessons learned from this valuable resource.

Understanding Merge Conflicts

The HubSpot blog post begins by elucidating the fundamental concept of merge conflicts. A merge conflict transpires when two or more contributors to a Git repository make concurrent modifications to the same file or lines of code. When an attempt is made to merge these conflicting changes, Git finds itself in a quandary, unable to discern which version to prioritize. The result? A conflict that requires manual resolution.

Resolving Merge Conflicts

The most invaluable takeaway from the blog post is the comprehensive guide on resolving merge conflicts. Here’s a summary of what I’ve learned:

  1. Identify the Conflict: GitHub will promptly notify you if your pull request (PR) encounters a conflict. These conflicts usually arise in files that have undergone concurrent modifications on different branches.
  2. Locate the Conflict: Git marks the conflicting sections within your code with distinctive markers: <<<<<<< HEAD, =======, and >>>>>>> branch-name (see the sketch after this list).
  3. Manually Resolve the Conflict: The HubSpot blog offers a detailed walkthrough on how to address these conflicts manually. It involves carefully reviewing the conflicting code, deciding which changes to keep, and removing the markers.
  4. Commit the Changes: After successfully resolving the conflict, you need to stage the modified file using git add and commit the changes using git commit -m "Resolved merge conflict in file-name".
  5. Update the Pull Request: By pushing the resolved changes to your branch with git push, the blog explains how to update your PR automatically with the resolved conflict.
  6. Merge the Pull Request: Finally, once the conflict is resolved, and your PR meets all criteria, it can safely merge into the main branch, ensuring the seamless progression of the project.
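
To make steps 2 and 3 concrete, here is a hypothetical sketch (my own illustration, not taken from the HubSpot post) of a conflicted Java method before and after manual resolution. The file, method, and branch names are invented, and the “before” snippet intentionally shows the non-compiling state Git leaves behind:

```java
// Greeter.java, before resolution: Git has wrapped the two competing
// versions of the same line in conflict markers.
public String greet() {
<<<<<<< HEAD
    return "Hello from main!";
=======
    return "Hello from feature-greeting!";
>>>>>>> feature-greeting
}

// Greeter.java, after resolution: keep the change you want (or combine
// both) and delete all three marker lines.
public String greet() {
    return "Hello from feature-greeting!";
}
```

From there, steps 4 through 6 above (git add, git commit, and git push) publish the resolution back to the pull request.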

Through this journey, I’ve not only learned how to resolve a merge conflict on GitHub but also gained insights into resolving conflicts using Git. So, the next time I encounter a merge conflict, I won’t panic; I’ll simply follow these steps, and I’ll be well on my way to becoming a Git pro. Happy coding!

This post was based on the new things I read in this blog post: https://blog.hubspot.com/website/merge-conflicts-github

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

Exploring the Strategy Design Pattern in Software Development

I recently came across a fascinating article that I believe directly relates to our course material as it is the focus of our current Design Patterns Homework. In this blog post, I will provide a summary, share my reasons for selecting this resource, offer my personal insights, and discuss how this newfound knowledge can be applied to our future practice as software developers.

The resource I found is an article titled “A Beginner’s Guide to the Strategy Design Pattern,” available on the FreeCodeCamp website. This article serves as an introductory guide to the Strategy Design Pattern in software development. It outlines the pattern’s purpose, components, benefits, use cases, and best practices for implementation. The core idea of this pattern is to encapsulate a family of algorithms, making them interchangeable at runtime.

Why I Chose This Resource

I selected this resource because I was struggling to understand the homework assignment, and this article helped me better understand the strategy design pattern. Moreover, the article provides practical examples and clear explanations that make it accessible to beginners like myself.

Reflections on the Content

The article begins by explaining the core concept of the Strategy Design Pattern. It emphasizes the benefits of encapsulating algorithms into interchangeable strategies, including improved code flexibility, reusability, and simplified testing. I found this concept highly relevant to our studies, as it promotes clean and maintainable code, a fundamental skill for any software developer.

The article discusses real-world use cases for the Strategy Design Pattern, such as sorting algorithms, validation rules, and payment processing. These examples helped me see the pattern’s practical application in various scenarios, and I can envision using it in my future projects.

Additionally, the article provides a step-by-step guide on how to implement the Strategy Design Pattern in Java, breaking down the process into clear, manageable steps. This hands-on approach was incredibly valuable, as it demonstrated how to apply the theoretical knowledge in a real coding scenario, much like the one seen in the Duck Simulator.
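
To give a feel for the structure the article describes, here is a minimal Java sketch in the spirit of the Duck Simulator. The names (FlyBehavior, FlyWithWings, and so on) are my own illustration, not the article’s exact code:

```java
// Strategy interface: declares the family of interchangeable algorithms.
interface FlyBehavior {
    void fly();
}

// Concrete strategies, each encapsulating one algorithm.
class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings!"); }
}

class NoFly implements FlyBehavior {
    public void fly() { System.out.println("I can't fly."); }
}

// The context holds a strategy and delegates to it, so the algorithm
// can be swapped at runtime without changing the Duck class itself.
class Duck {
    private FlyBehavior flyBehavior;

    Duck(FlyBehavior flyBehavior) { this.flyBehavior = flyBehavior; }

    void setFlyBehavior(FlyBehavior flyBehavior) { this.flyBehavior = flyBehavior; }

    void performFly() { flyBehavior.fly(); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Duck mallard = new Duck(new FlyWithWings());
        mallard.performFly();                 // Flying with wings!
        mallard.setFlyBehavior(new NoFly());  // swap the strategy at runtime
        mallard.performFly();                 // I can't fly.
    }
}
```

Because the Duck depends only on the FlyBehavior interface, each strategy can also be unit-tested in isolation, which ties directly into the testing benefit discussed below.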

Application to Future Practice

Understanding the Strategy Design Pattern will undoubtedly benefit us in our future practice as software developers. Here’s how:

  1. Code Flexibility: By using this pattern, we can make our code more adaptable to changing requirements. It allows us to swap out different strategies at runtime, making our software systems more versatile.
  2. Reusability: The Strategy Design Pattern promotes the reusability of code. We can create a library of interchangeable strategies that can be applied to various projects, saving time and effort.
  3. Clean Code: Implementing this pattern encourages clean coding practices by separating concerns and reducing code complexity. This results in code that is easier to read, maintain, and debug.
  4. Testing: With strategies separated from the main object, testing becomes more straightforward. We can test each strategy in isolation, ensuring that it functions correctly.

Conclusion

In conclusion, the Strategy Design Pattern is a valuable tool in software development, and I believe this article provides a solid foundation for understanding and implementing it. As future software developers, mastering design patterns like this one will be essential for creating efficient, maintainable, and flexible code. I encourage you to read the article and explore this pattern further to enhance your skills in software development.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Exposing The Ignorance?

Exposing ignorance is a part of the learning process, and since this pattern is aimed at apprentices, it’s safe to assume that having ignorance is expected at that stage. What I disagree with about this pattern specifically is the premise: if you’re an apprentice, it’s hard to imagine being in a situation where your co-workers or employer assume you have everything under control. In my opinion, this pattern seems more suited to the journeyman, someone who has the experience of a craftsman but doesn’t know everything about the craft, or who, as the author mentions, might mistake expertise for good craftsmanship, which are two different things.

What I do like about the pattern, though, is that the author emphasizes that learning how to do something isn’t enough; how one learns the task is more important than anything. As the author puts it, “Expertise is a by-product of the long road we’re all on, but it is not the destination.” This gave me a different perspective on the learning process: it can be a tough road, but that seems to be the point. Many times I tend to find shortcuts to get something done faster because, like anyone, it’s a relief to know something is completed and out of the way, but I never end up truly understanding the task. Sometimes getting things done quickly without absorbing the substance is fine, when there’s good reason for it; in those situations, once I have to get something done, I’ll spend some of my free time afterwards trying to fully understand it. Learning how to take full advantage of the learning process is how one becomes a good craftsman.

Sources:

Hoover, Dave H., and Adewale Oshineye. Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman. O’Reilly, 2010.

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

Apprentice Pattern: Expose Your Ignorance

This week I continued with chapter 2 of Apprenticeship Patterns once again. The pattern I read was Expose your Ignorance. The context this pattern gives is the people paying you to be a software developer are depending on you to know what you’re doing. The problem with this is your team members and manager need confidence that you can deliver but you are unfamiliar with some of the technologies.

The solution to this problem is to show the people depending on you that the learning process is part of delivering software, and to let them see you grow. Software developers build strong relationships with clients and colleagues, so telling the truth about being in the learning process, instead of claiming you know how to do something you don’t, is important. Doing this builds your reputation on your ability to learn rather than on what you already know. Asking questions is a good way of exposing your ignorance. Those who do not take on the process of exposing their ignorance become experts in one domain and develop a narrow focus. It is important for the industry to have experts, but that should not be the goal of an apprentice.

The action suggestion is to write a list of five things you don’t understand about your work. Put the list somewhere others can see it. Then get in the habit of refreshing this list as your work changes.

The reason I chose this pattern is that I expect to run into this situation when I get my first job as a software developer, and I will probably face this problem throughout my whole career, because there will always be some new technology I do not understand. The solution given makes sense and brings up some good ideas, but I disagree a bit with the action plan. I think it is a good idea to list the things that need to be worked on, but I don’t think the list needs to go where others can see it. As long as it is refreshed frequently and the skills are being worked on, I think it’s fine to keep it to yourself. Of course, the part about being honest with others about not understanding things, and asking questions, should still apply.

From the blog CS@Worcester – Ryan Klenk&#039;s Blog by Ryan Klenk and used with permission of the author. All other rights reserved by the author.

week-5

I want to say hello in the fifth week of my blog and write a new entry. March is my favorite month, so I’ve been enjoying myself as much as possible. In Boston, there are many things to do, including drinking and attending social and entertaining events. Spring break is coming up relatively soon; it will be enjoyable to go over everything and take care of tasks in preparation for my impending graduation from college.

Now that I’ve finished the material presented in chapter 4, I will go on to the fifth chapter, which is about ongoing education. Throughout my reading, I encountered an important pattern: “Reflect As You Work.”

This pattern appeals to me because it is relatable to anyone who puts in the effort and gets things done; that way, people may reflect on what they’ve learned and how they’ve improved. It also appeals to me because regular introspection and questioning of one’s practices are vital preparation for elevation to senior posts, and that kind of introspection is something I already do. Beyond explicit reflection and noting changes in one’s own set of methods, it is possible to develop fresh ideas by observing more experienced developers and reflecting on their practices.

However, I disagree with other components of the practice: I do not believe that experience automatically equates to expertise; becoming proficient should be the aim.
On the other hand, the pattern does urge individuals to sketch out a map of their personal practices, to investigate and challenge existing practices and contemplate adopting alternative methods of accomplishing goals.

Have you noticed that the way you think about the work you want to do in the future or the career path you want to take as a whole has changed due to the practice?

In engaging in the “Reflect As You Work” exercise, I can get insight into the things I have accomplished, the shifts I have made, and the areas where there is room for improvement and enhanced quality of life. Employing this strategy sets the stage for my future profession, as it will allow me to save some time and avoid some hassle while also providing me with a fresh learning experience that I can share with others who face the same challenge.

From the blog Andrew Lam’s little blog by Andrew Lam and used with permission of the author. All other rights reserved by the author.

YAGNI!

While looking through blogs, I came across a familiar acronym that I use all the time when developing software and systems: “YAGNI,” which stands for “You Ain’t Gonna Need It.” According to the blog “Automation Principles – YAGNI / Premature Optimizations,” it is the principle of extreme programming that states a programmer should not add functionality until it is deemed necessary. The blog talks about how many engineers will spend multiple hours trying to build the “right system” the first time, when in practice building a flawless system on the first go is very difficult to achieve. The problem is that programmers spend too much time worrying about efficiency in the wrong places, and premature optimization can cause more harm than good.

The blog goes over Big-O notation, which cares not about constants but about the long-term growth rate of functions. This is a good rule to keep in mind, because introducing an optimization before even a fraction of the code is written can make a program much more difficult to support: as the blog explains, it increases design considerations, the likelihood of race conditions, and the difficulty of troubleshooting. Optimizing certain processes might not lead to any time savings or real optimization; in fact, it can do the exact opposite. A good example the blog gives is, in Python, constructing lambdas and list comprehensions where simple for loops would do. The blogger mentions that in his personal experience he would add non-functional requirements, such as authentication and logging, too early, building features before they were needed. With that said, I remember spending so much time adding the ability to connect my bank to my finance application that I didn’t have time to code the application itself.

The blogger also discusses network automation, explaining that networking is all about speed, so ignoring YAGNI isn’t really in the cards. He goes into real-world examples from the network automation process: with multithreading, overloading the TACACS server with too many requests at once is very problematic, and scaling wide too fast can slow processes down and consume too many resources; overall, it is very inefficient. Configuration generation can likewise take too long and be inefficient. With all of this in mind, the blogger isn’t refusing to consider tomorrow’s problems; he is more in line with building things up as they go.
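
To make the premature-optimization point concrete, here is a small Java sketch of my own (not from the blog); the class and method names are invented for illustration. Both versions have the same O(n) growth rate, but the “optimized” one adds state, a thread-safety hazard, and more code to troubleshoot before anyone has asked for it:

```java
import java.util.List;

public class InvoiceTotals {

    // The straightforward version: one pass over the data, O(n).
    static double total(List<Double> lineItems) {
        double sum = 0.0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }

    // A prematurely "optimized" version: caches the last result before
    // any profiling showed it was needed. Still O(n) on a cache miss,
    // so the growth rate is unchanged -- but now there is cached state
    // to invalidate, and it is not thread-safe: two threads interleaving
    // here can leave lastInput and lastTotal inconsistent.
    private static List<Double> lastInput;
    private static double lastTotal;

    static double cachedTotal(List<Double> lineItems) {
        if (lineItems.equals(lastInput)) {
            return lastTotal;
        }
        lastTotal = total(lineItems);
        lastInput = List.copyOf(lineItems);
        return lastTotal;
    }

    public static void main(String[] args) {
        List<Double> items = List.of(19.99, 5.49, 3.25);
        System.out.println(total(items));       // simple and sufficient
        System.out.println(cachedTotal(items)); // same answer, more moving parts
    }
}
```

Until a profiler shows the simple version is actually a bottleneck, YAGNI suggests the first version is the one to ship.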

“Automation Principles – YAGNI / Premature Optimizations” :

https://blog.networktocode.com/post/Principle-YAGNI/

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.