Category Archives: CS-348

Teamwork and Project Management

https://www.geeksforgeeks.org/software-engineering/software-engineering-software-project-management-spm/

When learning about the entire software building process and the Agile framework, we learned about a better and more efficient way of developing a project as a team. The overarching theme for having a well-maintained project that stays on course is having a good software project management system in place. This is important because software is intangible, making it difficult to visualize progress or quality without strict oversight. This article from GeeksforGeeks discusses Software Project Management, a discipline within software engineering focused on planning, implementing, monitoring, and controlling software projects. The goal is to deliver high-quality software on time and within budget by effectively managing resources, risks, and changes.

The practice encompasses several critical aspects, starting with detailed planning to outline scope and resources, followed by the active leading of diverse technical teams. Managers must oversee execution through rigorous time and budget management while also handling maintenance to address defects early. To achieve this, project management employs specialized management strategies, including risk management to minimize threats, conflict management to resolve team disputes, and change management to handle shifting goals. It also involves technical controls like configuration management to track code versions and release management to oversee deployments. Some drawbacks touched on are that the process can add complexity and significant communication overhead, especially with large teams.

I think it’s important to understand the different aspects of project management and what goes into creating a project as a team. Working as a team is critical in software engineering because modern projects are often too complex and massive for any single individual to handle efficiently. By dividing tasks, teams can work in parallel, allowing features to be built, tested, and deployed simultaneously, which significantly speeds up the development process. Beyond just speed, teamwork improves code quality through practices like peer reviews and pair programming, where “multiple eyes” on the code help catch errors that a solitary developer might miss. It can be easy as a student to think that getting into this field will mean sitting behind a desk and working on your own aspect of a project; however, working within a team and adhering to the group structure and workflow management can be a shock to people new not just to the software field but to the workforce in general. When working on a large team, it can be easy to stray from the goal or specifications without strict planning and oversight. Software project management provides the necessary framework to navigate this, ensuring that unique client requirements are met precisely rather than relying on assumptions.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

Understanding Technical Debt: Why It Actually Matters More Than We Think

When I first heard the phrase “technical debt,” I honestly thought it was just a fancy developer way of saying “bad code.” But after reading Atlassian’s article “Technical Debt: What It Is and How to Manage It,” I realized it’s way deeper than that. The article explains technical debt as the cost of choosing a quick solution now instead of a cleaner, long-term one. This can come from rushing to meet deadlines, adding features without proper planning, skipping tests, or even just writing code before fully understanding the problem. What I liked about this resource is that it breaks the topic down in a way that makes sense, showing how debt doesn’t always come from laziness; sometimes it’s just the reality of working in fast-paced software development.

I picked this article because technical debt is something we’ve basically been talking about throughout CS-348, even if we didn’t always call it that. Whether it’s writing maintainable code, designing clean architecture, or keeping up with version control, everything connects back to avoiding unnecessary debt. I’ve heard instructors and even classmates say, “We’ll fix that later,” and this article made me understand the impact behind that mindset. It stood out to me because it not only defined the problem but walked through how teams can recognize debt early and avoid letting it build up until it becomes overwhelming.

Reading this article made me realize how much technical debt affects the entire development process, not just the code. It slows teams down, creates frustration, and makes simple tasks more complicated than they should be. One part that hit me was how the article described debt snowballing over time. It reminded me of school assignments: if you ignore a confusing part early on, it always comes back to make the whole project harder. Another point I loved was the idea of being honest about debt instead of acting like it doesn’t exist. Communication is a big deal in development, and the article made that very clear.

Moving forward, I’m definitely going to be more intentional about how I write and manage code. Instead of rushing through things just to “get it done,” I want to slow down and think about how my decisions today could affect future work, both for me and for anyone else who touches the code. Good documentation, regular refactoring, testing early, and asking questions when something feels off are all habits I want to bring into my future career. Understanding technical debt helped me see software development as a long game, and being aware of these trade-offs will help me build better, cleaner projects in the future.

Source:

https://www.atlassian.com/agile/software-development/technical-debt

From the blog CS@Worcester – Circuit Star | Tech & Business Insights by Queenstar Kyere Gyamfi and used with permission of the author. All other rights reserved by the author.

Development Environments

For this blog, I decided to do some research about development environments. When looking for sources to reference, I came across the article: “Comparison of Development Environments” on Coder’s blog. This blog post goes from simple development environments to more complicated ones.

The article starts off by going into depth about what development environments are. Integrated development environments, or IDEs, are the center of where developers navigate and edit code. However, the IDE is just one of the components in the development environment. There are also build tools, package managers, system dependencies, and configurations. There are many development environment architectures as well.

  • Pure local environments: used by single developers or small development teams since everything is purely local. Low cost since everything is stored locally.
  • Virtual Desktop Infrastructure: the development environment is on a separate remote virtual desktop. Allows better storage across bigger teams and saving in a separate place.
  • Dev Containers: the development environment is packaged into a container. Provides a way to precisely specify the development environment once so everyone can have the same versions and controls on launch every time. I know in class we were able to build our own dev containers to match the Java version and Java compiler version.
  • Cloud Development Environments: a platform dedicates, manages, and monitors dev environments in the cloud.
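To make the dev container idea concrete, here is a minimal sketch of what a .devcontainer/devcontainer.json file might look like for pinning a Java version across a team. The name, image tag, and extension ID here are my own illustrative assumptions, not taken from the article:

```json
{
  "name": "java-class-project",
  "image": "mcr.microsoft.com/devcontainers/java:17",
  "customizations": {
    "vscode": {
      "extensions": ["vscjava.vscode-java-pack"]
    }
  },
  "postCreateCommand": "java -version"
}
```

With a file like this checked into the repository, everyone who opens the project in a container gets the same JDK version and editor tooling on launch.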

This blog really helped me dive into the deeper areas of development environments. Often, when it comes to where code is stored and not just writing it, I get lost and tend to get confused. Seeing the images the article used helped show what does what and where things are held. Also, as with most things, there are pros and cons to using these environments. An example is pure local environments being used by single developers or small development teams since they are purely local. This is good because it is low cost, but it is hard to work in big development teams since everything is stored locally.

I haven’t had much experience using different environments and have instead mostly focused on coding itself, but knowing these aspects is crucial when working on projects and software. Knowing to use cloud development environments for big groups and how to set up dev containers is very important for proper workflow and for making sure everyone on the team is on the same page. I hope to understand these things better as I push forward into my career.

From the blog CS@Worcester – Works for Me by Seth Boudreau and used with permission of the author. All other rights reserved by the author.

Version Control

https://www.geeksforgeeks.org/git/version-control-systems/

In our class, we spent a lot of time exploring Git and the power of version control systems. A Version Control System (VCS) is an essential software tool designed to track and manage changes to source code over time. Its primary function is to maintain a detailed history of a project, allowing developers to record every update, collaborate effectively without overwriting each other’s work, and revert to previous versions if necessary. This article from GeeksforGeeks provides a comprehensive overview of VCSs in general, explaining what they are, the different types available, and the most popular ones used today.

The article explains how VCSs come in three different forms: Local, Centralized, and Distributed. Local Version Control Systems operate strictly on a single computer, making them suitable only for individual use, though they carry a high risk of data loss if that machine fails. Centralized Version Control Systems solve the collaboration problem by using a single server to store all files and history; however, this creates a single point of failure where server downtime stops all work. Distributed Version Control Systems address this vulnerability by allowing every developer to mirror the entire repository locally. This means that if the server goes down, any client’s repository can be used to restore it, and most operations, such as committing changes, can be done offline before pushing them to a shared server.

Git is a distributed version control system used to track changes in files, especially source code, during software development. This means developers can work offline, make changes, create branches, and experiment without affecting the main project until they are ready to share their updates. Git also provides tools for merging changes from multiple contributors, resolving conflicts, and keeping a clear history of who made each change and why.
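As a toy illustration of the distributed model (not real Git internals), a short Python sketch can capture the key idea: every clone carries the full history, commits happen locally, and a push just sends the commits the remote doesn't have yet:

```python
# Toy model of a distributed VCS: every clone holds the full history,
# so commits can be made offline and shared later with a push.

class Repo:
    def __init__(self, history=None):
        # Each repository (server or clone) stores the complete history.
        self.history = list(history or [])

    def clone(self):
        # Cloning copies the entire history, not just the latest snapshot.
        return Repo(self.history)

    def commit(self, message):
        # Committing is purely local -- no server needed.
        self.history.append(message)

    def push(self, remote):
        # Pushing sends only the commits the remote does not have yet.
        remote.history.extend(self.history[len(remote.history):])

server = Repo()
laptop = server.clone()           # full copy of the project
laptop.commit("fix login bug")    # works offline
laptop.commit("add tests")
laptop.push(server)               # share changes when back online
print(server.history)             # ['fix login bug', 'add tests']
```

Real Git tracks file snapshots, branches, and merge conflicts on top of this, but the offline-commit-then-push rhythm is the same.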

Learning Git has been beneficial to me as a new programmer because I can now host, share, and update my code in a structured and maintainable manner. Utilizing online platforms that work with Git helps with contributing work to other projects as well as letting people contribute to mine. I remember in previous classes where we had to work on group coding projects, it was difficult to update and maintain our code as a cohesive unit. We would find ourselves emailing snippets of code back and forth in order to implement new changes. With the knowledge of Git and GitLab/GitHub, in future projects I will resort to creating project repositories that can be simultaneously updated and changed while keeping track of all edits and fixes. Also, since these online platforms are widely used and accepted in the programming field, I will have a place to host all the personal projects that will build my portfolio for future employers to access. They will be able to see the progress and changes I have made on certain projects so they can see my improvement as a programmer.

From the blog Anna The Dev by Adrianna Frazier and used with permission of the author. All other rights reserved by the author.

A Billion Commits into The Future

https://github.blog/news-insights/octoverse/what-986-million-code-pushes-say-about-the-developer-workflow-in-2025/

For this blog, I chose to read GitHub’s 2025 Octoverse article, “What 986 Million Code Pushes Say About The Developer Workflow in 2025.” The article analyzes nearly a billion commits from developers around the world and highlights how software development teams are adapting their workflows as the entire CS landscape continues to change and evolve. I selected this article because it felt directly related to what we’ve talked about in CS-348, particularly how processes like CI/CD shape development practices and how Agile could potentially relate to some of the points Cassidy Williams, the author, brings up about teamwork.

One of the central ideas is that “iteration is the default state.” Instead of shipping big releases occasionally, developers should instead push small parts constantly. The article explains how smaller and more frequent commits have become normal. Developers fix a bug, build a small feature, or tweak a config, and then push. These smaller, lightweight commits lead to smaller, more focused pull requests with a single purpose. The article also emphasizes that constant shipping reduces risk because smaller changes are easier to debug and rollback.

The article also argues that communication patterns need to catch up with development and workflow changes. Some of the fundamental changes that Williams believes need to happen in order for communication to keep up with development include: standups becoming shorter or entirely asynchronous, blocked pull requests no longer being acceptable, and hiring shifting toward people who ship the fastest. In Williams’s look-ahead at the end of the article, they mention how “AI fatigue” is real but that, in the end, the best tools will win out.

Reading this made me think pretty critically about my own habits. I have always waited to commit until I felt a feature was worth committing, until it felt “big enough.” This article helped me realize that the size of a commit is entirely arbitrary and has nothing to do with the importance of a commit. More frequent commits are better for safety and for collaboration. I also realized that I have heavily underused feature flags and often think of tests as an entirely separate act from development, when in reality they should be tightly connected and done pretty much constantly. Looking forward, I want to adopt the practices mentioned in this article; lightweight commits, strong integration and deployment, and clear communication can hopefully help bridge the gap between communication and development.

From the blog CS@Worcester – My Coding Blog by Jared Delaney and used with permission of the author. All other rights reserved by the author.

The Definition of Done (DoD)

How to write a Definition of Done

CS-348, CS@Worcester, Week-4

Source Article: https://www.atlassian.com/agile/project-management/definition-of-done

Recently, in doing a project in class, I had to write a Definition of Done (DoD) file for my portion of the project. I had a basic understanding of what needed to be conveyed, but not exactly how to convey it. I wanted to know more. I looked further into what the industry standard is, and that’s how I came across this post, which I used to help me figure out the one for my project.

I recall from the previous class activity that we looked at the Scrum Guide by Ken Schwaber and Jeff Sutherland, and the guide did give information on what the Definition of Done was and what it is meant to convey, but I wanted something more in-depth.

According to the article, the Definition of Done is a shared set of criteria that tells a Scrum team when a product increment is truly complete. The specifications contained are not created by one person but agreed upon by the entire team because there needs to be a shared understanding of what’s expected at the end of each sprint and for the project overall. This is needed to avoid miscommunication and make sure the team adheres to the pillars of Scrum: transparency, inspection, and adaptation. The developers of the team have the responsibility of parsing out what the DoD will be continuously as it will evolve and change as increments pass.

I knew that whatever was agreed upon needed to be something measurable and testable; otherwise, there would never be a satisfying way to declare being done. But the article mentioned something about the DoD needing to also be “ready to ship”. This means that there can be no hidden work left after the sprint, and there can’t be extra polishing stages.

The most helpful portions of the article were the examples that were provided of what can be included in a DoD. Here are some of them.

  • Increment passes SonarQube checks with no Critical errors
  • Increment’s Code Coverage stays the same or gets higher
  • Increment meets agreed engineering standards
  • Acceptance Criteria for Increment pass
  • Acceptance Tests for Increment are Automated

Most of the examples include the project passing specific tests or meeting certain standards that would be either required by the client or the organization the team is working for, which is par for the course. The DoD is an important part of Scrum, and I need to understand how to think of a project in order to write one in future cases.
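To illustrate the "all criteria must pass" nature of a DoD, here is a small hypothetical Python sketch; the criteria names and the increment fields are stand-ins I made up, not something from the article:

```python
# Hypothetical sketch: an increment is "done" only if every agreed
# criterion in the Definition of Done passes -- partial credit doesn't count.

definition_of_done = {
    "static analysis passes": lambda inc: inc["critical_errors"] == 0,
    "coverage did not drop": lambda inc: inc["coverage"] >= inc["previous_coverage"],
    "acceptance criteria pass": lambda inc: inc["acceptance_tests_passed"],
}

def failing_criteria(increment):
    # Return the list of failed criteria; an empty list means "done".
    return [name for name, check in definition_of_done.items()
            if not check(increment)]

increment = {
    "critical_errors": 0,
    "coverage": 84.0,
    "previous_coverage": 82.5,
    "acceptance_tests_passed": True,
}

failed = failing_criteria(increment)
print("Done!" if not failed else f"Not done, failing: {failed}")
```

The point of encoding it this way is that the check is binary and shared: either every criterion the team agreed on holds, or the increment is not done.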

From the blog CS@Worcester – A Beginner's Journey Through Computer Science by Christiana Serwaah and used with permission of the author. All other rights reserved by the author.

What is Linting

A quick overview of linters

Source: https://www.perforce.com/blog/qac/what-is-linting

Recently, in class, we did an activity on creating a lint script. The activity honestly confused me a little bit because of the many questions I had about it. One of the questions I wanted to explore is more about what linters do and why we use them. The activity gives a brief description of what linters are and their purpose.

According to the activity, Linters are tools that check the formatting and style of code and files in projects. Some extensions, like the one used earlier in the activity, markdownlint, can perform some of this checking, but not all tools are available as extensions. I would like to understand them a bit more, so I chose a source that went into extensive detail about what linting is and linters.

A linter scans the code for things that don’t necessarily prevent the code from running but can cause bigger issues later on, such as small bugs, conflicting formatting, and bad style choices. It can also look for common errors like indexing beyond arrays, dereferencing null pointers, unreachable code, and non-portable constructs.
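To make that concrete, here is a toy "linter" in Python. Real linters like Pylint or markdownlint are far more sophisticated, but the basic pattern of flagging style issues without ever running the code looks something like this:

```python
# Toy linter: reports style problems line by line without executing the code.

def lint(source):
    issues = []
    for num, line in enumerate(source.splitlines(), start=1):
        if len(line) > 79:
            issues.append((num, "line too long"))
        if line != line.rstrip():
            issues.append((num, "trailing whitespace"))
        if "\t" in line:
            issues.append((num, "tab character (use spaces)"))
    return issues

code = "x = 1   \n\ty = 2\n"
for line_num, message in lint(code):
    print(f"line {line_num}: {message}")
```

Notice that none of these findings would stop the program from running; they are exactly the kind of consistency problems the article describes.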

Linting is better suited for programming languages like Python and JavaScript because they are interpreted languages that don’t have a compiling phase, so linting helps with keeping up consistency. Linting is more effective in code that follows standard rules and in projects that need to adhere to shared style guidelines.

Linters are a basic form of static analysis tool, which are any tools that can analyze code without running it. More advanced tools are able to detect:

  • Deeper data-flow problems
  • Runtime risks
  • Security vulnerabilities
  • Complex rule violations
  • Defects across multiple files or modules

Linters are a very helpful tool, but they do have some limitations that need to be accounted for when using them on a project.

Pros

  • Catch small issues early
  • Improve code consistency
  • Reduce time spent on reviews
  • Support teamwork and shared standards
  • Great for beginners who need guidance
  • Fit well into Agile workflows

Cons

  • Can produce many warnings
  • Sometimes flags harmless code
  • Cannot detect deep logic problems
  • Needs to be configured correctly
  • Can slow you down

Overall, I learned that linters are one of many analysis tools that can be used on a program, and I also learned one of the ways methodologies like Scrum are able to maintain transparency and deal with continuity and consistency issues when dealing with a larger team.

I would like to become more familiar with creating lint scripts so I can integrate them more into my programs, especially since consistency is something I have issues with as I learn how to code better.

From the blog CS@Worcester – A Beginner's Journey Through Computer Science by Christiana Serwaah and used with permission of the author. All other rights reserved by the author.

Blog post Quarter 4

For this quarter’s self-directed professional development blog, I chose to watch the YouTube video “Clean Code: Learn to write clean, maintainable and robust code.” It is an older video, but I wanted a resource that explained Clean Code principles in a way that connects directly to real programming habits, especially since we’ve been focusing on refactoring and reducing complexity in CS-348. Instead of reading another article, I thought watching a different creator explain the ideas visually would help reinforce the concepts from class in a new way.

This video introduces Clean Code by framing it as both a technical and professional skill. The speaker explains that messy code slows down teams, increases bugs, and creates long-term maintenance problems, while clean code allows developers to move faster and collaborate more effectively. He breaks down major Clean Code principles, including meaningful naming, small functions, consistent formatting, avoiding duplication, reducing side effects, and writing code that communicates its intent clearly without relying on extra comments. He also emphasizes the value of refactoring regularly instead of waiting until the system becomes too large to comfortably improve.

I chose this video because it feels practical and grounded. This video also focuses on habits that any developer can adopt, whether they’re building side projects or working on large software teams. The examples were simple but effective, especially when he showed how shortening functions, renaming variables, or removing unnecessary logic instantly improved readability.
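A tiny before-and-after in Python shows the kind of renaming and function extraction the video demonstrated. This is my own made-up example, not one of the speaker's:

```python
# Before: unclear names and one function doing two jobs at once.
def calc(d):
    t = 0
    for i in d:
        t += i[1] * i[2]
    return t * 1.07

# After: intention-revealing names and small, single-purpose functions.
TAX_RATE = 1.07

def line_item_total(item):
    _, price, quantity = item
    return price * quantity

def order_total_with_tax(order):
    subtotal = sum(line_item_total(item) for item in order)
    return subtotal * TAX_RATE

order = [("apple", 2.0, 3), ("bread", 4.0, 1)]
assert calc(order) == order_total_with_tax(order)  # same behavior, clearer code
```

The behavior is identical, but the second version tells the reader what the numbers mean without any comments or mental gymnastics.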

What stood out to me the most was his point that code should be written for humans first and machines second. This video made it click for me that good code is for humans first and computers second because he showed how unclear naming or tightly coupled functions force the reader to do mental gymnastics. When I look back at my older project, I can see exactly where I created those kinds of problems. This video made me more aware of how quickly messy patterns can spread if they aren’t addressed early.

This resource affected how I approach our CS-348 refactoring assignment. Instead of waiting until the end to fix everything, I’m improving readability each time I revisit a section. As the video explained, clean code is about developing consistent habits that make software easier to understand, maintain, and extend.

Moving forward, I expect these practices to influence my future professional development, whether I’m collaborating on a team or working independently.

From the blog CS@Worcester – Tristan CS by Tristan Coomey and used with permission of the author. All other rights reserved by the author.

Blog 4 – Software License

In our world, when you want to drive a car on public roads, you need a driver’s license to do so. This is because you need permission to use the road freely. Also, if you want to go fishing, you need a fishing license, which gives you permission to fish at any public lake or public fishing area. As we know, a license provides you the authorization to access something that requires someone’s permission. The same is true in software engineering: in software, we require a license to access someone’s code or project. It grants rights to you without transferring ownership from the author. However, there are multiple types of software licenses, each with its own function. I will explain everything in this blog based on what I read from https://finquery.com/blog/software-licenses-explained-examples-management/

What is a software license?

A software license is a legal agreement between the software creator/provider and the user (individual or organization) that defines how the software may be used. It doesn’t transfer ownership — it grants rights.

The license specifies things like how many devices you can install on, whether distribution or modification is allowed, and other restrictions (or freedoms) depending on the license type.

How licenses work & why they matter?

When you obtain software, what you actually pay for is the right to use that software under certain terms, not the software itself.

Licenses protect the intellectual property rights of developers, control distribution, and can also allow or forbid modification. That helps prevent piracy and misuse.

Before using software, users often must agree to an End User License Agreement (EULA) — a legally binding contract defining permitted and prohibited actions under that license.

Common Types of Software Licenses

  • Public domain license: No restrictions — software is effectively “free for all.” Anyone can use, modify, relicense, or commercially exploit it without needing to pay or give credit.
  • GNU Lesser General Public License (LGPL): Open-source license allowing developers to use/license the code in their own projects without having to release their own source code.
  • Permissive license: A flexible open-source license with minimal restrictions — typically just requiring attribution/copyright notice on redistribution. Common in open-source community
  • Copyleft license: More restrictive open-source license: if you modify and distribute the software, you must also make your version’s source code freely available under the same license
  • Proprietary license: Most restrictive — software remains proprietary, users get limited rights; typically no access to source code, and strict limits on copying, sharing, modification, or redistribution.

Software License Management

Because organizations often use many pieces of software (some licensed, some subscription-based), managing this properly is important. The article recommends:

  • Keeping a centralized inventory of all software licenses and subscriptions used within the organization.
  • Tracking license usage, renewals, and compliance — ideally via a dedicated license-management tool rather than manual spreadsheets.
  • Establishing reminders for renewals and conducting regular audits to avoid under- or over-licensing, penalties, or wasted spending
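A centralized inventory with renewal reminders could be sketched in a few lines of Python. This is a toy stand-in for a dedicated license-management tool, and the software names and dates are invented for illustration:

```python
# Toy license inventory: track each entry and flag upcoming renewals,
# standing in for a dedicated license-management tool.
from datetime import date, timedelta

inventory = [
    {"software": "IDE Pro",      "license": "proprietary",  "renewal": date(2025, 1, 15)},
    {"software": "CI Service",   "license": "subscription", "renewal": date(2025, 6, 1)},
    {"software": "Diagram Tool", "license": "subscription", "renewal": date(2026, 3, 10)},
]

def renewals_due(inventory, today, within_days=60):
    # Return the names of entries whose renewal date falls in the reminder window.
    deadline = today + timedelta(days=within_days)
    return [entry["software"] for entry in inventory
            if today <= entry["renewal"] <= deadline]

print(renewals_due(inventory, today=date(2025, 1, 1)))  # ['IDE Pro']
```

Even a simple script like this beats a manual spreadsheet for catching renewals before they lapse, which is exactly the kind of regular audit the article recommends.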

Conclusion

If you ever find an interesting piece of code or a project that you want to install on your computer or use as a reference, make sure to keep an eye on its license. Identify its type, and follow the permissions it allows. This helps you avoid any legal risks, control costs, and efficiently manage software assets.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.

Choosing Code Rules: Navigating Licensing for Health Technology

Understanding software licensing is one of those crucial, non-coding topics that becomes a massive deal the moment you start building real applications. When covering this in my Software Process Management course, I was overwhelmed and intimidated by the density of the numerous license types and legal details. For this blog post, I decided to use this opportunity to strengthen my understanding of these various software licenses. This information is vital for my plans, especially for creating high-integrity therapeutic platforms where security and protecting ideas are non-negotiable. To achieve this goal, I looked at several resources to deepen my understanding.

To make sense of licensing, the best first step is grouping licenses into two major categories: Proprietary (closed source) and Open Source. As the resource by Zluri, “3 Major Types of Software Licenses & Its Categories,” states, proprietary licenses are the most restrictive type, as their whole purpose is to keep the source/base code private. This means you can only use the software with permission and under strict conditions; you will not be able to edit or distribute the code yourself. For my work, this proprietary model might be necessary to protect the specific clinical methodology or algorithms I develop for future therapeutic platforms. Open source licenses, on the other hand, make the source code publicly available, which is best for those looking for collaboration and efficiency.

However, as this resource by BlackDuck states, open source licenses can become overwhelming. To best understand them, I grouped the following licenses from easiest to most difficult to handle. Permissive licenses (i.e., MIT and Apache 2.0) are the easiest to use, as they are very flexible and only require you to include the original copyright notice. Copyleft licenses (i.e., GPL) have a strict rule to follow: if you distribute a modified version of the software, you must also make your modified source code available under the same copyleft license. If I used a copyleft component in a therapeutic game, for example, I would be forced to release the source code of my entire game, which would conflict with protecting the core methodology and design.

Licensing is about more than just reading and considering rules; strategic planning is very important when choosing which license to work with. It directly impacts the ethics, decisions, and compliance of any platform handling sensitive information. If I am to build things like interactive feedback systems or symptom management trackers, I need to know exactly which third-party tools I can safely use. My current task is figuring out that delicate balance to ensure I build professional, compliant, and sustainable software that protects both users and the core therapeutic intervention.

Main Resources:
3 Major Types of Software Licenses & Its Categories
https://www.zluri.com/blog/types-of-software-licenses

Five types of software licenses you need to understand
https://www.blackduck.com/blog/5-types-of-software-licenses-you-need-to-understand.html

Additional Resources:
Software Licensing Models: Your Complete Guide
https://www.revenera.com/blog/software-monetization/software-licensing-models-types/

Software License Types, Examples, Management, & More Explained
https://finquery.com/blog/software-licenses-explained-examples-management/

From the blog CS@Worcester – Vision Create Innovate by Elizabeth Baker and used with permission of the author. All other rights reserved by the author.