Category Archives: CS@Worcester

Clean Code: The Foundation of Readable, Organized Code

As I look back at my older projects and code, the lack of organization and structure is obvious. I’m thankful for my detailed comments, because that code would have taken much longer to read without them. In our course, we went over the principles and practices associated with writing and abiding by “clean code” strategies. To deepen my knowledge on this matter, I found an article called “How to Write Clean Code – Tips and Best Practices (Full Handbook)” by German Cocca. I trust this resource because it comes from freeCodeCamp; I have used this website in the past and I think it is a very useful source of free information. I also trust the author because he is a full-stack developer, and the piece comes from his own blog.

This article states that clean code is more than just code that can run and function. Clean code should be easy to read, understand, maintain over time, and break down. His pillars of clean code are effectiveness, efficiency, and simplicity. While the focus of coding should always be the functionality of the code, that functionality should be achieved while optimizing resource usage and maintaining clarity.

The most important idea I took from learning about clean code in the classroom was that if you have to comment code, you didn’t write it clearly or efficiently enough. The same goes for functions: functions should be kept as small as possible. This kind of thinking helped me step back and re-evaluate my coding approach, and because of it I feel I write more readable, efficient code now.
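
As a minimal sketch of that idea (the class, method names, and numbers here are hypothetical, not from the article or the course), the condition that would otherwise need a comment is pulled into a small method whose name says what the comment would have said:

```java
public class OrderReport {

    // Instead of a comment like "check if the order qualifies for free shipping"
    // above a block of conditions, the check gets its own descriptive method.
    public double shippingCost(double orderTotal, boolean isMember) {
        if (qualifiesForFreeShipping(orderTotal, isMember)) {
            return 0.0;
        }
        return 5.99; // flat rate, chosen only for this sketch
    }

    private boolean qualifiesForFreeShipping(double orderTotal, boolean isMember) {
        return isMember || orderTotal >= 50.0;
    }
}
```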

This article also goes over a very important idea that had a similar effect on my coding as the last one: SRP, the single responsibility principle. This means every class or module should only have one job. If you need to validate orders, calculate a total, and save data, these should all be done in their own separate methods, classes, or functions. This makes the code much more readable, and it makes implementing these functions easier.
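
A rough Java sketch of that separation, using invented class names rather than anything from the article: validation, total calculation, and persistence each live in their own class, so each has exactly one reason to change.

```java
import java.util.List;

class Order {
    List<Double> itemPrices;
    Order(List<Double> itemPrices) { this.itemPrices = itemPrices; }
}

class OrderValidator {
    boolean isValid(Order order) {
        return order.itemPrices != null && !order.itemPrices.isEmpty();
    }
}

class OrderTotalCalculator {
    double total(Order order) {
        return order.itemPrices.stream().mapToDouble(Double::doubleValue).sum();
    }
}

class OrderRepository {
    void save(Order order, double total) {
        // Placeholder: a real implementation would write to a database or file.
        System.out.println("Saved order with total " + total);
    }
}
```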

These concepts have also been discussed or touched upon in my class, but the concept of modularization was not. Modularization involves breaking down complex code into much smaller pieces, which makes the code easier to test, maintain, and understand. Folder structure was also not talked about in my class. Folder structure is crucial for keeping a clean, scalable codebase. The structure should keep related files together based on their functionality: for example, instead of organizing by file type, you would organize by feature. If every feature has its own place, it will be easier to go back and modify it.
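
As a purely hypothetical illustration of that feature-based layout (the folder and file names are mine, not the article’s), related files are grouped by what they do rather than by what kind of file they are:

```
src/
  orders/        # everything the "orders" feature needs lives together
    OrderService.java
    OrderRepository.java
    OrderValidator.java
  payments/
    PaymentService.java
    PaymentGateway.java
  users/
    UserService.java
    UserRepository.java
```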

I enjoyed looking into this site because it laid everything out in a nicely organized manner. It explained everything briefly enough to maintain my interest, but was in-depth enough that I was getting the information and knowledge I needed. This website also provided nice code examples for everything it mentions.

Reference:

https://www.freecodecamp.org/news/how-to-write-clean-code/ – How to Write Clean Code – Tips and Best Practices (Full Handbook) by German Cocca

Tags: CS@Worcester, CS-343, Week-11

From the blog CS@Worcester – SPM blog by Aaron Nano and used with permission of the author. All other rights reserved by the author.

The Most Useful Tool in a Developer’s Toolkit: Development Environments

Intro

Choosing a development environment is a decision that can be made based on feeling, or by taking the time to think out each choice and analyze which best fits your needs. Either way, the environment a developer uses is critically important: it is where all of a project’s code is written, making it the tool every developer spends the most time using. It’s a personal choice, and this blog by Matthew LeRay goes over everything you need to know about development environments.

Summary of Source

This blog covers everything you need to know about development environments, including their purpose, their importance, and what IDEs are, with some examples. The main sections are:

  1. Definition and Purpose: A structured setup of tools and processes that enhances software creation by automating tasks, supporting debugging, and ensuring consistency with production.
  2. Types of Development Environments: Explains the purpose and distinct roles of development, testing, staging, and production environments.
  3. Integrated Development Environment (IDE): The evolution of IDEs and what they offer, such as speed/efficiency and customizable features.
  4. Setting up a Development Environment: Goes through the steps of configuring your environment, from choosing the IDE, to configuring it and using tools like build automation.

The Reason I Chose This Source

For any new programmer, looking for an IDE to use can be confusing because of the lack of knowledge about what they even are, mixed with the daunting task of choosing one to learn and use. I chose this blog because it bridges that gap, taking a new programmer from having no idea what a development environment even is to choosing and setting up their IDE. It’s a very reader-friendly resource that even some experienced developers could learn from.

A Reflection on IDEs

I personally use Visual Studio Code for the majority of what I program, but I have used IntelliJ as well. I chose my IDE based more on appearance and general word of mouth, which is why I gravitated towards VS Code, as it’s arguably the most popular and user-friendly IDE. I do like IntelliJ as it feels good to use, and although a drawback to others might be that it’s a Java-focused IDE, I only use Java so that isn’t a problem for me. VS Code also has a great variety of personalization options because of the extensions tab in the IDE. Extensions are great not only for appearance but also for functional improvements. I think extensions are a big reason VS Code is so popular, along with its ability to support many languages rather than being centered on one the way IntelliJ is. An IDE encompasses a ton of different tools a developer uses, so picking one that fits your needs is important. Becoming comfortable and familiar with the IDE you use matters more than switching to the “best” IDE based on abstract metrics that others believe are the most important things to have in an IDE.

My Future IDE Plans

I think I will continue to use VS Code for now, but I can see myself trying out more technical, less user-friendly editors like Vim in the future. There really isn’t a need to switch things up if what I have is working, and honestly I don’t think the choice should be switched often. I will also probably utilize IntelliJ more, as I do think it’s the best IDE for Java, which is the language I use most often.

Citation

Understanding Modern Development Environments: A Complete Guide by Matthew LeRay

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

SOLID Principles Made Easy

As my year progressed, I wanted to better understand the basic principles of software design, so I sought out the SOLID principles after hearing about them throughout college. I had not yet put the time into properly learning and practicing them, and I came across a blog that puts the principles into text simply and with excellent examples. The article’s title is “The SOLID Principles of Object-Oriented Programming Explained in Plain English” by Yiğit Kemal Erinç.

The article begins by identifying what the SOLID principles are and why they matter. These principles are five key guidelines for object-oriented design that help developers create maintainable, flexible, and understandable code. Each principle is presented in great detail, with several coding examples and potential drawbacks or dangerous pitfalls. They are:

  1. Single Responsibility Principle (SRP): A class should have one responsibility and one reason to change. By adhering to the SRP, you minimize complexity, avoid conflicting changes, and simplify version control and collaboration.
  2. Open-Closed Principle (OCP): Classes should be open for extension but closed for modification. This means new functionality can be added without altering existing code. This is often done by using interfaces or abstract classes. 
  3. Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types without altering the correctness of the program. 
  4. Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they don’t use. This principle advocates for creating small, specific interfaces rather than large, general-purpose ones. It ensures that classes only implement methods relevant to their functionality, and there is less bloat and redundancy in the code.
  5. Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. This promotes flexibility and low coupling, making systems easier to extend and modify. A minimal sketch of this principle follows the list below.
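
Here is that minimal Java sketch of the Dependency Inversion Principle; the Notifier interface and class names are hypothetical, not taken from the article. The high-level OrderService depends only on an abstraction, so low-level details like email or SMS can change without touching it.

```java
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

class OrderService {
    private final Notifier notifier;

    // The high-level service receives an abstraction; it never constructs
    // a concrete notifier itself, so swapping EmailNotifier for an
    // SmsNotifier requires no change here.
    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    void placeOrder(String item) {
        notifier.send("Order placed: " + item);
    }
}
```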

The key takeaway presented in the conclusion is that by following SOLID principles, developers can write cleaner, more testable, and scalable code.

The SOLID principles are essential building blocks for writing efficient code. This topic was addressed in class through design smells and design patterns, and learning it through the lens of SOLID helped me appreciate its importance in building good code. Through the CS-343 course I was looking to understand those design patterns better, and understanding them as SOLID principles lets me grasp them more fully. I will be prepared to answer questions about SOLID principles in context thanks to this article in combination with the class material.

Source: https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

The Hidden Backbone of Software Development: Software Documentation

Intro

In software development, documentation often takes a backseat to coding, testing, and deploying. However, software documentation is the backbone of more easily maintainable, scalable, and collaborative projects. This blog post by David Oragui gives useful information on why documentation is essential and how it supports both developers and end-users throughout the software lifecycle.

Summary of Source

The blog post explores the role of software documentation and offers practical advice on creating and maintaining it effectively. The main sections are:

  1. What is Software Documentation?: A definition and explanation of how it serves as a guide for developers, users, and stakeholders, providing clarity on a system’s functionality and usage.
  2. Types of Documentation: A breakdown of key categories, including user documentation, developer guides, technical documentation, and process documentation.
  3. Best Practices for Writing Documentation: Practical tips, such as structuring content logically, using plain language, and keeping documentation up-to-date.
  4. Using Software Documentation Tools: The different types of documentation tools and the reasons to consider them, including automation, collaboration, and accessibility.

Why I Chose This Blog

I selected this blog because it is a concise resource that explores all there is to know about documentation, making it a great guide to refer back to when needed. In my coursework, the focus has largely been on coding, but I’ve noticed that a lack of proper documentation can make even the best-written software hard to use and maintain. This blog stood out for its clear and actionable advice, which is especially valuable as I create projects and prepare for internships.

Reflection

The blog’s structured approach to explaining software documentation makes it a great introductory resource. One section that particularly stood out was the breakdown of the different documentation types. It clarified the different audiences for documentation (end-users, developers, and stakeholders) and how each requires tailored content. For example, user documentation should be simple and accessible, while developer guides need to be more technical and detail-oriented.

This difference in target audiences was an eye-opener, even though it seems obvious once it’s said. There’s no reason to include technical details in documentation an end-user will see, because they won’t understand them or even need them. Whenever I thought about software documentation before, I pictured a one-size-fits-all document that explained the technical details of the software.

Another valuable takeaway was the emphasis on keeping documentation up to date. It made me consider the consequences of never updating documentation: incorrect information that could end up causing a lot of trouble. When the point of documentation is to make a project easier for all parties to understand, outdated docs contribute to the very problem documentation is trying to solve.

Future Application

Moving forward, I plan to apply these best practices to my projects. I will create different documentation resources for different target audiences and update the docs every time a change is made, so there is no confusion from outdated information. Software documentation simply makes the whole process of understanding and maintaining code a lot easier, so there is really no downside to adding quality documentation to a project.

Citation

Software Documentation Best Practices [With Examples] by David Oragui

https://helpjuice.com/blog/software-documentation

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Semantic Versioning or Updating Numbers?

Versioning plays an important role in development, as it offers a clear framework for tracking changes and updates to code. One system that has become widely adopted is semantic versioning (SemVer), which provides a standardized approach to naming software releases. For this blog entry, I chose to review the AWS blog post on semantic versioning. This resource offers a detailed exploration of how semantic versioning simplifies release management and ensures consistent communication among developers, operations teams, and users.

The Article

The article introduces semantic versioning as a three-part versioning system, MAJOR.MINOR.PATCH (e.g., 2.3.1), where each segment corresponds to a specific type of change (a short sketch after the list shows what this signals in practice):

  • MAJOR: Incremented when incompatible API changes occur.
  • MINOR: Updated when new features are added in a backward-compatible way.
  • PATCH: Increased when bug fixes or minor changes that don’t affect the API are implemented.
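
As a small illustration of what those numbers communicate (the Java class below and its rule of thumb are my own sketch, not something from the AWS post), comparing an installed version against a new release shows whether an upgrade should be backward-compatible:

```java
public class SemVer {
    final int major, minor, patch;

    SemVer(String version) {
        String[] parts = version.split("\\.");
        major = Integer.parseInt(parts[0]);
        minor = Integer.parseInt(parts[1]);
        patch = Integer.parseInt(parts[2]);
    }

    // Rule of thumb for this sketch: backward-compatible if MAJOR is unchanged,
    // since MINOR and PATCH increases are additive features or bug fixes.
    boolean isBackwardCompatibleWith(SemVer installed) {
        return this.major == installed.major;
    }

    public static void main(String[] args) {
        SemVer installed = new SemVer("2.3.1");
        System.out.println(new SemVer("2.4.0").isBackwardCompatibleWith(installed)); // true
        System.out.println(new SemVer("3.0.0").isBackwardCompatibleWith(installed)); // false
    }
}
```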

The blog highlights how this system helps manage software lifecycles by making the scope of changes transparent, which supports better collaboration between teams. It also emphasizes the importance of tagging releases in version control systems like Git to align codebase changes with their respective version numbers.

The article also showcases real-world applications of semantic versioning in continuous integration/continuous delivery (CI/CD) pipelines. For example, tools like AWS CodePipeline and CodeDeploy can use version tags to automate deployment processes, ensuring consistency and reducing the likelihood of errors.

Why I Chose This Resource

In class, we learned about the importance of versioning in software projects and how clear communication of changes can prevent confusion, particularly in team environments. Semantic versioning was introduced as a standard approach, but I wanted to explore its real-world applications further. This AWS blog post stood out because it not only explained SemVer but also demonstrated its practical use in release management and CI/CD pipelines, areas that weren’t covered in the current coursework.

Personal Reflections

The resource clarified how semantic versioning improves team collaboration by setting clear expectations about software updates. For instance, knowing that a MINOR update is backward-compatible or that a MAJOR update might require significant adjustments removes ambiguity for developers and users.

One aspect that particularly resonated with me was the integration of semantic versioning into CI/CD workflows. The article’s example of automating deployments based on version tags helped me understand how these practices can streamline release management, reducing manual errors and accelerating delivery timelines. I had not considered how tools like AWS CodePipeline could interact with semantic versioning to achieve this level of efficiency.

Future Practice(s)

I plan to adopt semantic versioning in all future projects, starting with any academic group work. For example, using version tags in Git will help my team better manage changes and understand the implications of updates. Additionally, I want to experiment with automating deployments using CI/CD tools like GitHub Actions or AWS CodePipeline, as the article suggests.

Whether contributing to open-source projects or collaborating in a work environment, semantic versioning will help me communicate changes clearly and maintain quality and control.

Conclusion

The AWS blog post brought up the importance of semantic versioning as a tool for simplifying release management and fostering collaboration. It not only deepened my understanding but also inspired me to integrate these practices into my current and future workflows. Semantic versioning is more than just a numbering system; it’s a critical framework for ensuring clarity, consistency, and efficiency in software development. Thank you for reading my blog!

https://aws.amazon.com/blogs/devops/using-semantic-versioning-to-simplify-release-management

https://www.geeksforgeeks.org/introduction-semantic-versioning/

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

Transparency and Autonomy: Better Together

In continuing my research on team management strategies, I delved deeper and more specifically into the software development side of team management. In doing so I discovered the scrum.org blog, which has many different articles focused on understanding Scrum. Two of the most important principles in Scrum are transparency and autonomy, and I wished to delve deeper into understanding how to achieve them in a team setting. The article I found explained how those two play into each other greatly. The article’s title is “Transparency and Autonomy: Two Sides of the Same Coin” by Sanjay Saini.

The article begins by explaining Agile’s fast-paced style of producing working code and how autonomy and independence can be essential for fast results. It explains that in Agile, teams seek this autonomy to make decisions and deliver value without excessive oversight, and that transparency is essential for fostering that autonomy. By making work visible, tracking progress, and openly addressing challenges, teams can be given more autonomy and trust. The article highlights five key points that help build this trust and efficiency:

  1. Visibility Creates Trust: By sharing progress and challenges during Scrum events like Daily Scrums and Sprint Reviews, it shows that the team is accountable and can be trusted to be autonomous.
  2. Transparency in Challenges Leads to Solutions: Being open about struggles encourages collaboration and problem-solving, proving the team can manage setbacks and seek out help when they need it independently.
  3. Data-Driven Transparency Builds Confidence: Using metrics like velocity and burndown charts shows consistent results, building leadership confidence in the team’s capability.
  4. Transparency Leads to Better Decision-Making: When a team has full visibility into goals, priorities, and feedback, they can make informed decisions independently. Information needs to be shared freely for autonomy, and good decisions, to be possible.
  5. Open Communication Builds Long-Term Autonomy: Regular, open communication about decision-making processes helps cultivate trust and secure more autonomy over time, as the team can continue to build trust through constant demonstration of these values.

The article concludes by saying that transparency creates a culture of trust and accountability, enabling Scrum teams to earn the autonomy needed to make decisions and drive value.

This article helped me understand the importance of these values to a Scrum team’s operation. It is a key step in understanding Scrum’s role in the operation of a team, as values such as transparency can make a smoother work environment for everyone by providing autonomy. Next in my blog, I will look into articles relating to development environments such as Docker or GitPod and their importance for maintaining a productive team.

Source:

https://www.scrum.org/resources/blog/transparency-and-autonomy-two-sides-same-coin

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Merge Conflicts

I believe that version control systems like Git are an important tool for developers. Yet one of the more challenging aspects of working with Git is resolving merge conflicts, a common occurrence in collaborative projects. For this blog entry, I chose to review the Graphite guide on resolving merge conflicts. This resource provided a clear, step-by-step approach to handling merge conflicts, and I found it both insightful and practical after first learning the topic through homework and in class.

Guide of Merge Conflicts

The guide explains the basics of merge conflicts in Git, outlining what they are and why they occur. It details the types of conflicts, which arise from edits to the same line of code or overlapping changes across different branches, and walks through resolving them using Git commands like git status and git diff to identify issues and git merge to bring changes together. The guide concludes with best practices to prevent merge conflicts, such as pulling the latest changes regularly, using feature branches, and maintaining clear communication within a team.
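
To make that concrete, here is an invented example of a conflict and its resolution (the file, branch name, and tax logic are made up for this sketch, not taken from the guide): Git marks the overlapping edits with conflict markers, and the developer edits the file to combine both changes before staging and committing.

```java
// Hypothetical conflict in Checkout.java after running `git merge feature-tax`.
// Git would insert markers like these (shown here as a comment):
//
//   <<<<<<< HEAD
//   double total = subtotal + shipping;
//   =======
//   double total = subtotal * (1 + TAX_RATE);
//   >>>>>>> feature-tax
//
// The resolved version below keeps both intentions; afterwards you would run
// `git add Checkout.java` and `git commit` to complete the merge.
public class Checkout {
    private static final double TAX_RATE = 0.0625; // rate assumed for this sketch

    public double total(double subtotal, double shipping) {
        return (subtotal * (1 + TAX_RATE)) + shipping; // combines both branches' edits
    }
}
```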

Why I Chose This Resource

I chose this resource because the topic was a little confusing to me at first, and reading multiple articles and websites like this one refreshes your knowledge. Merge conflicts are not just a concept we’ve discussed in passing; we learned about the importance of version control in collaborative coding environments, how tools like Git enable teamwork by allowing simultaneous contributions, and how conflicts can arise when changes overlap. Even so, this has to be one of the most stressful aspects of group projects.

Personal Reflections and Insights

Reading this guide helped demystify merge conflicts. I particularly liked the detailed explanations of the commands, as it’s easy to misuse or misinterpret them when under pressure or unsure of what to do. While I’ve often focused on “fixing the conflict,” I’ve neglected to verify how the changes interact, which has caused issues in past projects.

Another valuable takeaway was the importance of adopting preventive measures. In class, we learned about best practices like pulling changes frequently and using feature branches, but this guide provided additional context that made these tips feel more actionable.

Future Practice

I want to apply this knowledge in upcoming group projects. Whether working on a shared repository for class or contributing to open-source projects, knowing how to resolve merge conflicts efficiently will save time and reduce confusion. This guide also inspired me to explore additional tools, like Visual Studio Code’s merge conflict interface, to streamline the process further. By combining these technical skills with teamwork, I will be better prepared to contribute effectively in collaborative environments and to resolve merge conflicts with confidence.

https://graphite.dev/guides/how-to-resolve-merge-conflicts-in-git

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

Software Design Principles

For this week, I decided to find a blog about software design principles to refresh my mind on the topic. I found a blog called “Main Software Design Principles Every Developer Should Know” by Amr Saafan. The reason I decided to discuss this blog specifically is that it breaks down each section into short, simple bullet points.

The first section covers Object-Oriented Programming (OOP) principles. It explains principles like encapsulation, which guarantees that objects securely manage their data and behavior while concealing their complexity and granting limited access. Next is abstraction: by concealing details and concentrating on key elements, abstraction makes problem-solving easier and allows for solutions that are extendable and reusable. Inheritance encourages code reuse by enabling classes to share properties and functions, whereas polymorphism enables flexible behavior depending on object types, increasing flexibility and minimizing code duplication. I remembered these principles the most, possibly because I’ve had plenty of discussions about them; even so, I found the explanation of encapsulation very helpful.
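
A small Java sketch of encapsulation as the blog describes it, using a made-up BankAccount example: the balance is private, and the class only exposes operations that keep it valid.

```java
public class BankAccount {
    private double balance; // hidden: callers cannot set it to an invalid value directly

    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("Deposit must be positive");
        }
        balance += amount;
    }

    public double getBalance() {
        return balance; // read-only access to the internal state
    }
}
```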

Software design is further improved by the SOLID principles. In order to improve testability and maintenance, the Single Responsibility Principle (SRP) promotes modular classes with distinct responsibilities. In order to maintain stability, the Open/Closed Principle (OCP) advises adding functionality through new code as opposed to changing current code. The Interface Segregation Principle (ISP) promotes short, targeted interfaces, minimizing needless dependencies, whereas the Liskov Substitution Principle (LSP) preserves compatibility between base and derived classes. The Dependency Inversion Principle (DIP), which prioritizes abstractions over concrete implementations, encourages loose coupling. I needed a refresher on LSP and ISP the most out of this section, and I found the explanations to be helpful.
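
Since ISP was one of the principles I needed to revisit, here is a minimal, invented Java sketch of it: rather than one large do-everything interface, small targeted interfaces let a basic printer implement only what it actually supports.

```java
interface Printer {
    void print(String document);
}

interface Scanner {
    void scan(String document);
}

// A basic printer is not forced to provide a do-nothing scan() method.
class BasicPrinter implements Printer {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
}

// A multifunction device opts into both small interfaces.
class MultiFunctionDevice implements Printer, Scanner {
    public void print(String document) { System.out.println("Printing: " + document); }
    public void scan(String document)  { System.out.println("Scanning: " + document); }
}
```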

The next section, Don’t Repeat Yourself (DRY), explains how eliminating redundancy promotes simpler, maintainable, and error-resistant code through modularity and reuse. Keep It Simple, Stupid (KISS) promotes straightforward, effective solutions that are simpler to comprehend and maintain, while discouraging overengineering. The You Aren’t Gonna Need It (YAGNI) principle suggests avoiding unneeded features in advance, simplifying things, and concentrating on urgent needs. Those principles felt self-explanatory to me, but they were explained well. The Law of Demeter (LoD) improves modularity and decreases coupling by restricting an object’s interactions to its immediate dependents. I definitely needed the explanation of this principle. I found it helpful to have the list of how to apply it, which included items like Limit Direct Dependencies, Use Interfaces, and Avoid Chain Calls. It aided in my understanding of the principle.
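
A short, invented sketch of the Law of Demeter’s advice to avoid chain calls: the register asks the customer to pay instead of reaching through the customer into their wallet.

```java
class Wallet {
    private double balance = 20.0;

    boolean deduct(double amount) {
        if (amount > balance) return false;
        balance -= amount;
        return true;
    }
}

class Customer {
    private final Wallet wallet = new Wallet();

    // The customer exposes the behavior callers need...
    boolean pay(double amount) {
        return wallet.deduct(amount);
    }
}

class CashRegister {
    boolean charge(Customer customer, double amount) {
        // ...so the register talks only to its immediate collaborator,
        // instead of chaining customer.getWallet().deduct(amount).
        return customer.pay(amount);
    }
}
```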

Next, the Composition Over Inheritance principle suggests building systems with reusable components rather than deep inheritance hierarchies, improving flexibility and modularity. Finally, the Principle of Least Astonishment (POLA) ensures software behaves as expected, minimizing confusion by being clear, consistent, and intuitive.
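
A brief, invented Java sketch of composition over inheritance: rather than subclassing to change how an animal moves, the movement behavior is a component that can be swapped at construction time.

```java
interface MoveBehavior {
    String move();
}

class Flying implements MoveBehavior {
    public String move() { return "flies"; }
}

class Swimming implements MoveBehavior {
    public String move() { return "swims"; }
}

// The animal is composed of a behavior instead of inheriting it,
// so new movement styles need no new subclasses of Animal.
class Animal {
    private final String name;
    private final MoveBehavior moveBehavior;

    Animal(String name, MoveBehavior moveBehavior) {
        this.name = name;
        this.moveBehavior = moveBehavior;
    }

    String describe() {
        return name + " " + moveBehavior.move();
    }
}
```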

As a software developer, these design principles will help me design scalable, flexible, and user-friendly systems. KISS and YAGNI simplify designs to avoid unnecessary complexity, while composition and encapsulation add flexibility. Following POLA makes the system easier to understand and use for both developers and users.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Masters in Scrum

One method I’ve encountered repeatedly in both my coursework and during discussions with peers is Agile, specifically the Scrum framework. To better understand it, I recently read an article titled “Scrum Mastering the 3 Pillars, 5 Values, and 7 Key Principles of Agile Project Management”, which provides a clear breakdown of how Scrum works and why it’s so effective in software development. I found this resource insightful, and it’s something I can definitely apply in my future career.

The article explains the fundamental elements of Scrum, which include the 3 Pillars, 5 Values, and 7 Key Principles that form the foundation of this Agile framework. The 3 Pillars—Transparency, Inspection, and Adaptation—ensure that the process is open, regularly assessed, and flexible. The 5 Values—Commitment, Courage, Focus, Openness, and Respect—help create a collaborative and supportive team environment. Finally, the 7 Key Principles emphasize continuous improvement, self-organizing teams, and the importance of simplicity in problem-solving.

I selected this article because, as a beginner in computer science, I wanted to understand how project management frameworks like Scrum can be applied in real-world software development. Being new to coding and programming, I often feel overwhelmed by the amount of information and tools available. Scrum, with its structured approach, offers a clear way of organizing tasks, fostering teamwork, and ensuring that progress is continually monitored. Learning about Scrum is relevant to my future career because it’s widely used in the tech industry, particularly for software development and managing complex projects.

From reading the article, I gained a solid understanding of the core principles that make Scrum effective. The 3 pillars stood out to me, especially Transparency. As a student, I can relate to the importance of transparency in team projects where communication is key to understanding who’s doing what, when, and how. Inspection and Adaptation also made me realize how crucial it is to frequently check our progress and be willing to change course when necessary, which can save a lot of time and effort in the long run.

The 5 Values were a reminder of the importance of collaboration and maintaining a positive, respectful team environment. These values are essential, not just for Scrum but for any professional setting. I particularly appreciated the focus on Courage, which resonated with me as I’m still learning how to approach new and challenging problems in my coursework.

Finally, the 7 Key Principles reinforced the idea of simplicity and the need to avoid overcomplicating solutions, something I’ve noticed in my own work when I get caught up in trying to build complex solutions rather than focusing on what’s truly necessary.

I plan to apply the principles of Scrum, especially the importance of adaptation and simplicity, in my future projects. Whether it’s a group coding project or individual work, Scrum’s emphasis on regular inspection and continuous improvement will help me ensure that I’m always learning and adjusting as I go.

Resource:

“Scrum Mastering the 3 Pillars, 5 Values, and 7 Key Principles of Agile Project Management”

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Software Maintenance

Source: https://www.geeksforgeeks.org/software-engineering-software-maintenance/

This article is titled “Software Maintenance – Software Engineering.” Software maintenance “refers to the process of modifying and updating a software system after it has been delivered to the customer.” There are many different aspects involved in this, including fixing bugs, adding new features, and keeping up with new hardware and software requirements. Maintenance is very important for ensuring that software remains useful over the long term. This process can be expensive and complex, so these factors must be taken into account during the planning of a software development project.

The important tasks in software maintenance are bug fixing, enhancements, performance optimization, porting and migration, re-engineering, and documentation. Summarizing these tasks: it is important to find and fix errors quickly, add new features and improve existing ones, improve the performance of the software, adapt the software to run on different hardware, improve the design, and maintain accurate documentation of all of these processes.

There are quite a few different types of software maintenance, but they can be categorized into proactive and reactive types. “Proactive maintenance involves taking preventive measures to avoid problems from occurring, while reactive maintenance involves addressing problems that have already occurred.” Maintenance can be done by stakeholders, the development team, or a third party, and it can be either planned or unplanned. Planned maintenance can be described as regular maintenance (bug fixes), while unplanned maintenance can be described as reactive maintenance that occurs when something unexpected happens. Maintenance also falls into these categories: corrective, adaptive, perfective, and preventive. Corrective maintenance refers to fixing bugs and enhancing the performance of the system. Adaptive refers to modifications made when a customer needs the software to run on a different system. Perfective refers to adapting the software when a customer has new demands. Lastly, preventive maintenance refers to modifications that focus on preventing future issues with the software. Software maintenance is important, but there are some things to consider: the cost, complexity, the possibility of new bugs, users not updating the software, compatibility, technical debt, and end-of-life (when maintenance is no longer possible or cost-effective).

I chose this article because I found it in the syllabus and thought the topic was interesting. We are always learning about the development of software, but the idea of maintaining it over the long term isn’t considered as heavily. A large part of the work of a software development team is obviously to develop software, but it is also important to learn how that software can maintain a sense of longevity, free from errors and customer complaints. I will keep the information I learned from this article in mind in future projects and when I’m working with a team, to ensure that I’m developing software while keeping maintenance in mind. If maintenance is considered during the development process, the maintenance process will be much easier.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.