Category Archives: Week 5

Exploring the Classical Waterfall Model in Software Development

The classical waterfall model serves as the foundational software development life cycle (SDLC) model, embodying a structured, sequential approach to project management and software development. While it may not be as commonly employed today, its significance lies in being the basis upon which other SDLC models have evolved, with each step and its details planned before work begins. The model is most relevant to large, complex projects: its rigorous, phase-driven progression makes it suitable for scenarios where requirements are well defined and stakeholders seek a high level of confidence in the outcome.

Although the waterfall model is now less prevalent in contemporary software development, having proven less effective than more agile methodologies for many projects, it remains a foundational framework for understanding software development life cycles. Its structured, sequential approach entails phases such as requirements gathering and analysis, design, implementation, testing, deployment, and maintenance, each building upon the preceding one. It is a document-driven model that places high importance on quality control and rigorous planning, ensuring that the project is well defined and the team operates with clarity and precision.

The simplicity and linear progression of the waterfall technique offer advantages for specific project scenarios. The approach favors discipline, requiring that requirements be defined before design and design before coding. For smaller, well-understood projects, it can be effective in maintaining clarity and ensuring milestones are met.

At the same time, the rigidity and limitations of the waterfall model become apparent in more complex, dynamic projects. Its lack of flexibility to accommodate changing requirements and its late detection of defects pose significant challenges. The sequential nature of the model restricts stakeholder involvement in later phases, potentially leading to misunderstandings and costly revisions.

In practice, project managers and development teams should carefully assess project requirements, size, complexity, and the degree of uncertainty to select the most appropriate SDLC model, since the waterfall method can prove unwieldy for projects better suited to adaptability. Moreover, hybrid approaches, combining elements from multiple models, can offer the best of both worlds: structure and adaptability.

In conclusion, the classical waterfall model, while valuable for certain projects, is not a one-size-fits-all solution. It is best suited to situations where requirements are well defined and change is unlikely, such as large-scale, safety-critical, or government projects, which tend to have large budgets and therefore need to be mapped out carefully before the money is spent. In today's rapidly evolving software landscape, more adaptive SDLC models have gained prominence, offering flexibility and responsiveness to changing needs.

https://www.geeksforgeeks.org/software-engineering-classical-waterfall-model/

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

YAGNI

YAGNI is an acronym for "You Ain't Gonna Need It." It is a principle from Extreme Programming that says programmers should only add functionality once it is definitely necessary. Even when you are sure that you will need a piece of code or a feature later on, you should not implement it now; by the time you get there, you may not need it at all, or you may need something else instead. This is why you don't want developers to waste their time creating extra elements that might never be used and only slow the process down. YAGNI helps save time by avoiding work on features that might not be used, it lets the main features of the program be developed better, and less time is spent on each release. When you face a problem you can't yet solve, you are not in a position to make the best choices about a solution; once you know what is causing the problem, you can come up with a better plan to solve it. In software development, it is tempting to build a system that can deal with everything, even though only a few of its features will ever be used while the rest still demands attention and upgrades.

YAGNI can be applied by development teams of any size, so it isn't limited to only small projects or large enterprises. The principle can help set up a task list of do's and don'ts: first implement the selling feature and get the app ready for end users, and only after the app is functional start adding extra features in the next version. Waiting to add additional features saves developers a lot of time and effort and helps them meet project deadlines. Once your app is live you should keep up with its updates and keep making the app better; delaying updates in order to pack in more features can give competitors a chance to take your users. The first version of the app doesn't need to be perfect; if it can do the simple things and still fulfill its intended purpose, that is enough. Over time you can add all the extras you need instead of cramming them into one version. The You Ain't Gonna Need It principle keeps developers efficient, helps projects get done on time without adding anything that isn't necessary at the time, and reduces the stress of trying to squeeze every add-on feature into a single release. Because it is time-, stress-, and cost-efficient for developers, this principle should be used consistently.
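
To make the idea concrete, here is a minimal sketch in Java (my own hypothetical example, not taken from the TechTarget article). The class implements only what the current release actually needs, summing an order's line items, while the speculative extras noted in the comment are exactly the kind of "you aren't gonna need it" work the principle says to postpone.

```java
import java.util.List;

public class OrderTotal {
    // The one feature the current requirement actually asks for:
    // add up the line-item prices (in cents) for an order.
    public static long total(List<Long> pricesInCents) {
        long sum = 0;
        for (long price : pricesInCents) {
            sum += price;
        }
        return sum;
    }

    // Deliberately NOT implemented yet (YAGNI): multi-currency conversion,
    // discount codes, tax tables. They get added only when a real requirement
    // for them shows up.

    public static void main(String[] args) {
        System.out.println(total(List.of(1999L, 350L, 4500L))); // prints 6849
    }
}
```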

https://www.techtarget.com/whatis/definition/You-arent-gonna-need-it

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

Unveiling the Blueprint of Software Architectures: The Foundation of Digital Development

In the intricate world of software development, one essential factor underpins the creation of every digital marvel – software architectures. These structural frameworks are the unsung heroes, the master plans guiding the intricate construction of software applications. They serve as the invisible hand that shapes the organization of an application, defining its key components, the relationships between them, and the fundamental principles that govern their interactions.

Software architectures, though often behind the scenes, are pivotal in crafting software that’s not just functional but also efficient and tailored to meet specific requirements. They’re akin to the architects of a grand skyscraper, ensuring that each piece falls into place seamlessly, resulting in a robust and scalable digital structure.

Understanding the diverse architectural styles empowers developers to choose the right path for their projects. It’s akin to a skilled craftsman selecting the finest tools and materials for a unique creation. The choice of architecture significantly influences various aspects of a software system. It impacts the system’s performance, scalability, maintainability, security, and adaptability to change.

Embracing the versatility of architectural styles is akin to choosing different brushes for a painting. The software architects are the artists, and the blueprint they select is their canvas. As software development progresses, these architectures are not just abstract concepts; they become the very foundation upon which the digital world evolves.


From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

The Art of Code Refactoring

Since we have been discussing refactoring in class recently, it got me interested in finding out more about what makes refactoring… well “refactoring”. I found this interesting article “Refactoring vs. Defactoring” by Nicolas Carlo, a French-Canadian Software Engineer, which describes the difference between refactoring and debugging while also introducing the idea of “defactoring”.

The article starts with the definition of refactoring, which, according to Martin Fowler, is "a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior." In simpler terms, refactoring is all about tidying up the interior of a program while keeping the exterior the same.
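
As a small illustration of that definition, here is a hypothetical Java example of my own (not from Nicolas's article): the calculation is pulled into a well-named helper method, so the internal structure becomes easier to understand and cheaper to modify, while the observable behavior, the number returned to callers, stays exactly the same.

```java
public class PriceCalculator {
    // Before refactoring, the caller computed the discounted price inline:
    //   double total = quantity * unitPrice - quantity * unitPrice * 0.05;
    // After refactoring, the intent has a name, but the result is unchanged.
    public static double discountedTotal(int quantity, double unitPrice) {
        return applyDiscount(quantity * unitPrice);
    }

    private static double applyDiscount(double subtotal) {
        return subtotal - subtotal * 0.05; // same 5% discount as before
    }

    public static void main(String[] args) {
        System.out.println(discountedTotal(3, 10.0)); // 28.5, unchanged by the refactoring
    }
}
```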

Nicolas is clear, however, that fixing a bug, adding features, or changing features is not refactoring, but he points out the importance of refactoring code before altering the functionality of a program.

Nicolas states that, in his experience, the practices that help solidify this distinction are: making separate commits for refactorings and for behavior changes, committing more frequently, prefixing commit messages with R or C to specify the kind of change made, and learning to use automated refactoring tools to improve the health of one's code.

Nicolas also explains that by following these practices his work has become much safer and simpler than before. With this newfound awareness, he feels he can put his best quality into his work. He also gives a brief rundown of how thinking of refactoring and changes as two hats you wear while programming can help increase developer awareness.

While we discussed refactoring in class, I thought it was interesting that the title of the article frames defactoring as an opposing process to refactoring, but I came to find out it is not that at all. Defactoring is described by Nicolas as "cognitive refactoring," done by making the code less abstract in places where abstraction is no longer required.

He says that when working with legacy code, he notices items such as temporary variables that are just not needed anymore, even though they were necessary in the past. Nicolas calls the act of altering code to remove such variables "defactoring," since it strips out old abstractions that are no longer needed.
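
Here is a minimal sketch of what that kind of defactoring might look like, again a hypothetical Java example of my own rather than code from the article: the leftover temporary variables are removed because the abstraction they once provided is no longer needed, and the behavior does not change.

```java
public class InvoiceFormatter {
    // The legacy version kept intermediate variables that no longer earn their place:
    //   String trimmed = customerName.trim();
    //   String upper = trimmed.toUpperCase();
    //   return upper;
    // Defactored version: the unnecessary temporaries are gone, behavior is identical.
    public static String header(String customerName) {
        return customerName.trim().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(header("  acme corp  ")); // prints "ACME CORP"
    }
}
```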

After reading this article I feel I have a stronger understanding of the importance of separating refactoring from normal changes, since it can make a dramatic difference in a program's overall transparency. In my own work, I have realized that taking on one aspect at a time improves the cohesion and efficacy of the final product, but I never really thought about the importance of distinguishing changes from refactoring. Being aware of this in the future will help me create the best version of my work possible, by ensuring I have a more robust knowledge of a program's behavior and greater transparency in my code.

Article Link: https://understandlegacycode.com/blog/refactoring-and-defactoring/

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

Week 5 – A bit late but we’re getting there…

So it’s been a hot second since I set this blog up, and I apologize for the silence. Been busy focusing on homework and figuring out my work situation.

But with that aside, I just wanna talk about my past with GitHub and repositories before this class. I’ve actually used GitHub many times before, because I collaborate with a modding community. We focus on modding a video game known as Luxor, a classic PC game from the 2000s that I’ll share gameplay of below.

As for what a mod of this game entails, here's an example of a recent favorite of mine, Hollow, made by my friend Dommo:

A lot of effort has been put into these mods, and I’ve contributed to a lot of them, and even made my own. I have no recordings of it, unfortunately, but I swear it exists, haha.

Recently, though, we've been discussing how to properly archive mods. For the longest time, we've been using our modding Discord server to store them, but that poses an issue: many people might not have access to Discord due to their countries, operating systems, or various other reasons.

This led to some people moving over to GitHub, which was one of my first times learning how it actually properly worked. Before this, I simply downloaded stuff from it, but I learned the basics of how to push and pull repositories and have a local clone to work on and collaborate with multiple people.

Currently one of the biggest projects being developed using GitHub is OpenSMCE (https://github.com/jakubg1/OpenSMCE), a game engine being built on top of the Love2D engine to give us an open-source engine to build our mods on, as opposed to the limited and clunky engine we currently use with the original game.

The reason I bring this up is that the new information I'm learning in these classes is inspiring me to help work on OpenSMCE and learn what it's like to be part of a team doing software/engine development with Jakub, the developer of the project. This is an application I've been very excited to see reach a full release, and being able to say I contributed to it and helped it get there would be amazing.

Hopefully as the semester goes on, with the lessons I’m learning about how to create an application as well as work in a collaborative environment, I’ll end up contributing to this project, and maybe I can even use this blog as a way to discuss the ongoing developments and issues we’ve been facing with the development of OpenSMCE. It would be interesting, and I will probably reach out to Jakub within the next week about it.

Anyways, that’s all I have for this week, until next time!

-Tempura

From the blog CS@Worcester – You're Telling Me A Shrimp Wrote This Code?! by tempurashrimple and used with permission of the author. All other rights reserved by the author.

Data Redundancy – Relevance in Software Systems and Websites

In today’s world, businesses, organizations, and other entities that software and web developers consider “clients” heavily rely on being able to efficiently collect, access, and otherwise manage data for their day-to-day operations. For many, losing access to databases or similar outages hinders their ability to continue operations. In Data Redundancy: Meaning and Importance, author Charlotte White discusses data redundancy and some basic strategies and implementations to address these vulnerabilities.

Data redundancy goes beyond simply having backups of existing data (although those are an important component); it is a proactive plan to prevent data loss and maintain smooth operations in the event of a server shutdown, hardware malfunction, or other major disruption. It is crucial for ensuring the continuity of business operations, as website downtime often leads to financial losses, especially for new websites or those with low traffic. Outages can also impact search engine rankings, since uptime is a factor commonly considered by search algorithms. Furthermore, the data loss that follows a failure to plan can cause crashes and issues in other systems and destroy customer information, business details, and other critical or confidential information that is essential to an organization's success and reputation.

How Redundancy Works: Effective redundancy designs reduce dependency on any single copy of data or any single data center. They commonly implement the 3-2-1 rule of backups, which means having three copies of data in two different locations, one of which is offline storage. Redundancy strategies should also consider factors like hardware redundancy; many servers use hard disk drives (HDDs) to store data, and these can fail due to simple wear and tear. Some hosting companies use RAID (Redundant Array of Independent Disks) and un-RAID solutions to mirror data from HDDs to other storage devices, minimizing the impact of HDD failures.
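
As a rough illustration of the 3-2-1 rule described above, here is a minimal Java sketch (the BackupCopy type and the location names are my own invention, not from the article) that checks whether a backup plan has at least three copies of the data, spread across at least two locations, with at least one copy held offline.

```java
import java.util.List;

public class BackupPlanCheck {
    // One copy of the data: where it lives and whether that copy is offline storage.
    record BackupCopy(String location, boolean offline) {}

    // The 3-2-1 rule: three copies, two different locations, one of them offline.
    static boolean satisfiesThreeTwoOne(List<BackupCopy> copies) {
        long distinctLocations = copies.stream().map(BackupCopy::location).distinct().count();
        boolean hasOfflineCopy = copies.stream().anyMatch(BackupCopy::offline);
        return copies.size() >= 3 && distinctLocations >= 2 && hasOfflineCopy;
    }

    public static void main(String[] args) {
        List<BackupCopy> plan = List.of(
                new BackupCopy("primary-datacenter", false),
                new BackupCopy("primary-datacenter", false),  // mirrored copy on site
                new BackupCopy("offsite-tape-vault", true));  // offline, offsite copy
        System.out.println(satisfiesThreeTwoOne(plan));       // prints true
    }
}
```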

Recently in CS343, we’ve been looking at software architectures and strategies for organizing systems that could be realistically implemented to address clients’ needs. In particular, we’ve been considering the differences and strengths/weaknesses between a simpler architecture such as the Monolith versus a more complex architecture such as the MicroServices model, with several intercommunicating systems.

Most of the scenarios we discussed involved the ease of pushing out updates, but I was left wondering about the repercussions of a database or system going totally offline and the ways to manage that possibility. For businesses involved in eCommerce, uptime is money, in terms of sales as well as maintaining search engine optimization. Given how damaging a disruption like this could be, data redundancy plans are an important consideration when planning and setting up a website or system. Understanding the value of data redundancy and how it is implemented is an asset in planning and designing software systems and projects, and generally beneficial for computer science students and professionals.

Source:
1. Data Redundancy Meaning and Importance: A Complete Guide | ResellerClub India Blog

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Week of October 9, 2023

https://www.atlassian.com/microservices/microservices-architecture/microservices-vs-monolith

Since learning about different software architecture styles like the monolithic architecture, the client-server architecture and the microservices architecture, I’ve been curious how large-scale applications transition from one architecture to another as the project grows in scale. I found this blog post on the Atlassian website breaking down the differences between the monolithic architecture and the microservices architecture, as well as telling the story of Netflix’s innovative migration from a monolithic architecture to a microservices architecture.

The article begins with the example of Netflix’s transition between architectures. Netflix was growing rapidly by 2009 and needed to expand its software infrastructure to meet the massive demand. Before “microservices” as a term was in wide usage, Netflix was one of the first major companies to migrate to a microservices architecture, and in 2015 earned a JAX Special Jury award for its successful deployment. Netflix’s new architecture would model itself on DevOps, defined by Amazon as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity” (https://aws.amazon.com/devops/what-is-devops/#:~:text=DevOps%20is%20the%20combination%20of,development%20and%20infrastructure%20management%20processes.).

Following the story of Netflix's change in infrastructure model, the article continues with an explanation of the monolithic architecture style. The monolithic architecture is a traditional design in which the entire application is housed in a single, self-contained server. This architecture is simple to understand and easy to use as a foundation for your application. The major drawback of the monolith, however, is the difficulty of updating the application: making changes to the code base requires bringing the entire service offline. Monolithic architectures also do not scale well as the application grows.

The microservices architecture addresses some of the disadvantages that come along with a monolithic architecture. The application is divided into independent services, each with its own database and methods. With this model, only the components that require changes need to be taken down, leaving the rest of the application free to continue working.
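
To picture what one of these independent services might look like, here is a minimal sketch using the JDK's built-in com.sun.net.httpserver package (a hypothetical "catalog" service of my own invention, not something from the Atlassian article). A service like this runs in its own process with its own data, so it can be updated, restarted, or scaled without touching the other components.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class CatalogService {
    public static void main(String[] args) throws Exception {
        // One small, self-contained service listening on its own port.
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/catalog", exchange -> {
            byte[] body = "[\"widget\",\"gadget\"]".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        // Only this service needs a restart when the catalog logic changes;
        // the rest of the system keeps running.
        server.start();
    }
}
```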

One inherent barrier to using the microservices architecture is the expense of multiple machines to host the different microservices, as well as storage space for their accompanying databases. It may only be beneficial for an application to transition to a microservices model once it has reached a certain scale. Small to mid-size applications may be perfectly well served by monolith architectures for much less cost than hosting the application across a microservices architecture.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

Software Development Methodologies

The blog I have chosen to write about this week is an article called "12 Best Software Development Methodologies" by Intellectsoft, which I read to research and learn more about the different types of software development methodologies. The reason I chose this article in particular is how deeply it goes into each methodology, its pros and cons, and when to choose it. One point in the article I found very interesting is that as software becomes more advanced over time, companies spend more and more on researching and improving past development methods.

In class this week we have gone over the steps of software development as well as the waterfall and agile methodologies. Reflecting on the waterfall technique, it does not seem very practical, since changes are not possible without restarting the entire development process; however, the article states that there are relatively no financial risks "due to the high planning accuracy" and that every step has a given deadline. On the other hand, delivery time can stretch out if everyone working on the project is not on the same page. This method is not suitable for larger or ongoing projects.

The agile development method focuses on the project/product itself. In class, we watched a video on Agile covering its values, which include "Individuals and interactions over processes and tools" as well as "Responding to change over following a plan." Because Agile is a very flexible, on-the-go methodology, it risks insufficient budget predictability. The article states that this method "fits, young companies … open for communication," which provides top-of-the-line product quality.

A method I found interesting that was not talked about in class is the Spiral Development Model. This method seems to be a hybrid of both waterfall and agile. Like the waterfall method, it is done in phases and places an emphasis on risk management. In addition, similar to the agile method, it involves client collaboration in increments throughout the process. This method appears to offer the best of both worlds, though it is not suitable for smaller projects.

When reflecting on software development processes, there are many different factors to consider when choosing a methodology. Some of these factors include cost, project size, risk tolerance, client suggestions, and so forth. For smaller projects where I might collaborate with a team, I would consider using the agile methodology or lean development, both of which gather feedback throughout the process and offer flexibility.

Link https://www.intellectsoft.net/blog/top-12-software-development-methodologies-you-should-know/

From the blog CS@Worcester – Anthony Duong CS Blog by anthony duong and used with permission of the author. All other rights reserved by the author.

Software Development Methodologies

A software development methodology is a set of methods and workflow techniques used to design IT software solutions. As I've been learning, there are many different types of these methodologies. According to Synopsys, the four most popular are the Agile development methodology, the DevOps deployment methodology, the Waterfall development method, and Rapid application development. In my class, I've been learning about the Agile development methodology and the Waterfall development method. I think it is really interesting how there are so many different methods of developing software that all lead to the same outcome, and I'm interested in learning more about other methods. I think it will be very helpful to learn these different methodologies, since some are more effective in certain situations. Synopsys has really helped me because it covers two of the methods I already know, allowing me to get a better understanding of them, while also letting me learn a couple of new ones.

The first methodology that I learned about, the Agile development method, involves repeating all of the steps in short increments. The main reason this method is used is to help minimize risk when adding new functions to the software; these risks can include bugs, changing requirements, and cost overruns. Like all of the methods, the Agile development method has both pros and cons. One pro is that it allows the software to be released in multiple iterations rather than just one, which helps the developers fix bugs and change things early on in development. Another pro is that it allows users to see the benefit of the software earlier, rather than waiting for the entire piece of software to be completed. However, there are also some cons to this methodology. One major con is that new users are often behind and unable to get up to speed, since they don't have the documentation they need; this is because the method relies on real-time communication. Another major con is that Agile development methods may not be as helpful in large organizations, which are often used to other methods such as the waterfall method.

The second method that I learned about is known as the Waterfall method. Unlike the Agile method, the Waterfall method performs each step in order and doesn't move on to the next step until the previous one is completed. On top of that, each step is only performed once. The Waterfall method consists of sequential phases, each with its own goals: Requirements, Design, Implementation, Verification, Deployment, and Maintenance. This method is also known as the traditional method, as many different companies use it. A major pro of this method is that it is easy to understand and manage, since there is only one step with one goal going on at a specific time; this allows less experienced teams and managers to understand and benefit from it. A major con is that it can be very slow and also very costly, because of the way it is structured as well as its tight controls. I haven't learned about the other two methods on Synopsys, the DevOps deployment methodology and Rapid application development, but I think this website is a great source to learn more about them, and I'm excited to do some more research on these topics.

Synopsys: https://www.synopsys.com/blogs/software-security/top-4-software-development-methodologies.html

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Exploring Git

Addressing Merge Conflicts on GitHub

In the fast-paced world of software development, collaboration is the lifeblood that fuels innovation. One of the most powerful platforms for collaborative coding is GitHub, a platform that has revolutionized the way developers work together on projects. However, this collaborative utopia is not without its challenges, and one of the most notorious hurdles developers face is the dreaded “merge conflict.” To shed light on this issue, I recently explored the blog post titled “How to Resolve Merge Conflicts in GitHub” on HubSpot’s website. In this blog post, I will share my insights and lessons learned from this valuable resource.

Understanding Merge Conflicts

The HubSpot blog post begins by elucidating the fundamental concept of merge conflicts. A merge conflict transpires when two or more contributors to a Git repository make concurrent modifications to the same file or lines of code. When an attempt is made to merge these conflicting changes, Git finds itself in a quandary, unable to discern which version to prioritize. The result? A conflict that requires manual resolution.

Resolving Merge Conflicts

The most invaluable takeaway from the blog post is the comprehensive guide on resolving merge conflicts. Here’s a summary of what I’ve learned:

  1. Identify the Conflict: GitHub will promptly notify you if your pull request (PR) encounters a conflict. These conflicts usually arise in files that have undergone concurrent modifications on different branches.
  2. Locate the Conflict: Git marks the conflicting sections within your code with distinctive markers: <<<<<<< HEAD, =======, and >>>>>>> branch-name (see the sketch after this list).
  3. Manually Resolve the Conflict: The HubSpot blog offers a detailed walkthrough on how to address these conflicts manually. It involves carefully reviewing the conflicting code, deciding which changes to keep, and removing the markers.
  4. Commit the Changes: After successfully resolving the conflict, you need to stage the modified file using git add and commit the changes using git commit -m "Resolved merge conflict in file-name".
  5. Update the Pull Request: By pushing the resolved changes to your branch with git push, the blog explains how to update your PR automatically with the resolved conflict.
  6. Merge the Pull Request: Finally, once the conflict is resolved, and your PR meets all criteria, it can safely merge into the main branch, ensuring the seamless progression of the project.
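
To make steps 2 and 3 concrete, here is a small hypothetical Java example (the file contents and branch name are my own, not from the HubSpot post). The comments show the markers the way Git inserts them; the method body is what remains once you have chosen which change to keep and deleted every marker line. After a resolution like this, the file is staged and committed exactly as steps 4 and 5 describe.

```java
public class Greeting {
    // While the conflict is unresolved, Git marks the disputed region like this
    // (shown here as comments so the file still compiles):
    //
    // <<<<<<< HEAD
    //     return "Hello from main";
    // =======
    //     return "Hello from feature-branch";
    // >>>>>>> feature-branch
    //
    // After reviewing both versions, keeping the change you want, and deleting
    // every marker line, only the resolved code remains:
    public static String greet() {
        return "Hello from feature-branch";
    }

    public static void main(String[] args) {
        System.out.println(greet()); // prints the resolved version
    }
}
```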

Through this journey, I've not only learned how to resolve a conflict and merge a pull request on GitHub but also gained insights into how Git handles conflicting changes. So, the next time I encounter a merge conflict, I won't panic—I'll simply follow these steps, and I'll be well on my way to becoming a Git pro. Happy coding!

This post was based on what I read at the following link: https://blog.hubspot.com/website/merge-conflicts-github

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.