Category Archives: Week 12

Agile vs Waterfall

Week 12 – 11/30/2024

Agile Versus Waterfall: Picking the Right Tool for the Job

As a senior in college, I’m starting to think more seriously about what it will actually be like to work on projects in the real world. I recently read a blog post that compared and contrasted Agile and Waterfall project management methodologies, and it really helped me understand the importance of choosing the right approach for different types of projects. This aligns perfectly with what we’ve been discussing in my class about the need for both strategic planning and adaptability.

The blog post “Agile vs. Waterfall: Understanding the Differences” by Mike Sweeney explained that Waterfall is a very linear, sequential approach, where each phase of the project must be completed before moving on to the next. It’s kind of like building a house – you need to lay the foundation before you can put up the walls. This makes Waterfall a good choice for projects where the requirements are well-defined and unlikely to change, like in construction or manufacturing. In these industries, making changes mid-project can be super costly and impractical, so having a clear plan from the outset is essential.

Agile, on the other hand, is all about flexibility and iteration. The project is broken down into short cycles called sprints, and the team continuously reevaluates priorities and adjusts its approach based on feedback and new information. This makes Agile a great fit for projects where the requirements are likely to evolve over time, such as software development. In software development, client needs and market trends can change rapidly, so being able to adapt is crucial.

One of the biggest takeaways for me was the realization that choosing the right methodology is crucial for project success. I used to think that being flexible was always the best approach, but now I understand that structure and predictability can be equally important in certain situations. The key is to carefully assess the project requirements and choose the methodology that best aligns with those needs.

As I prepare to enter the professional workforce, I feel much more confident in my ability to approach projects strategically. Thanks to this blog post, I now have a better understanding of when to use Agile versus Waterfall. For instance, if I’m working on a software project that involves a lot of client interaction, I’d probably lean towards Agile. But if I’m managing a marketing campaign with well-defined objectives, Waterfall might be a more appropriate choice.

The real-world examples provided in the blog post were super helpful in illustrating how these methodologies are applied in different industries. This practical insight will definitely be valuable as I transition from the academic world to the professional world.

Blog link: https://clearcode.cc/blog/agile-vs-waterfall-method/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Why Git

Why is it always git


Git is the version control system that seemingly every programmer uses. Even in my computer science class, we had lectures dedicated to Git: the commands in Git, how Git is used, what Git is used for, and just so much Git. The funny thing is, there are other version control systems, such as Mercurial, but they never get brought up; they exist, but feel overshadowed by Git. So the question I am asking now is: why Git? So I did some investigating.

The question: what does Git do that is so special compared to other version control systems? Version control systems can do all sorts of things, such as letting developers see what has changed, enabling collaborative work, and branching and merging changes to a repo. If multiple systems can do all that, then what does Git do differently? An article from GeeksforGeeks lists several things. Git can be worked on offline and is resilient, because multiple developers each have a full copy of the repo, and any local copy can be used to restore the project. It also comes with conflict resolution, providing tools to inspect and resolve merge conflicts. So what about the other systems? Well, GFG has that covered. Here are some comparisons, and after them a quick command sketch of the offline workflow.

Subversion

  • Centralized architecture compared to Git: one single central repo
  • Fewer branching and merging options
  • Better performance in some operations

Mercurial

  • Smaller community compared to Git
  • Not as much flexibility as Git

Perforce

  • Can handle very large codebases
  • Not as flexible as Git in terms of merging
  • Git is open source and free, while Perforce isn’t
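
To make GFG’s offline-first point concrete, here is a quick sketch of the kind of workflow they describe (the repo URL and branch names are just placeholders):

```sh
git clone https://example.com/project.git   # every clone is a full copy of the history
cd project
git switch -c my-feature                    # branch locally, no server round-trip needed
# ...edit files with no network connection...
git add .
git commit -m "work done completely offline"
git push origin my-feature                  # sync back up whenever you're online again

# When two branches touch the same lines, Git flags the conflict and
# provides tools to inspect and resolve it:
git merge main
git diff          # show the conflicting hunks
git mergetool     # open a configured three-way merge tool
```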

That is a decent number of reasons to use Git over other VCSs. I think the community part is important for such a popular system, because if you aren’t too familiar with Git’s commands, there are a lot of people who can help. There are plenty of forums and articles about Git’s tools out there if you ever need them.

I also feel that the collaborative aspect of Git is very helpful. Many projects have a lot of people working on them, so having something like Git that can handle that and make the task easier is great. The fact that it is so accessible helps with that too.

Git being so popular makes a lot of sense now: accessibility, community, and collaboration are what a lot of developers require, and I have to say Git provides them well.

GeeksforGeeks. (2024, September 19). Git vs. other version control systems: Why Git stands out? https://www.geeksforgeeks.org/git-vs-other-version-control-systems-why-git-stands-out/

From the blog Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Best Practices for REST API Design

REST APIs are essential to today’s software development, helping applications communicate across platforms. I read the blog post Best Practices for REST API Design on the Stack Overflow blog, which provided advice on designing APIs that are efficient, secure, and user-friendly. This post furthered my understanding of “RESTful” principles.

Summary

The blog post outlines several key principles for designing REST APIs effectively. It begins by stressing the importance of a well-defined URL structure that reflects the resource hierarchy and uses nouns instead of verbs. For instance, /users/123/posts is a clear and intuitive way to access a user’s posts.

It also highlights the necessity of using standard HTTP methods (GET, POST, PUT, DELETE) to maintain consistency, along with proper status codes to provide meaningful feedback to clients. The post delves into techniques for handling query parameters, versioning APIs to ensure backward compatibility, and implementing pagination for large datasets. Security and performance are emphasized as critical considerations, with recommendations to use HTTPS and apply caching strategies.
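
To make these ideas concrete, here is a minimal sketch of what a couple of these practices might look like in an Express-style handler. This is my own hypothetical example in TypeScript, not code from the blog post; the route and the in-memory store are made up for illustration:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical in-memory store standing in for a real database
const postsByUser: Record<string, { title: string }[]> = { "123": [] };

// Nouns in the path; the HTTP method carries the verb: GET /users/123/posts
app.get("/users/:id/posts", (req, res) => {
  const posts = postsByUser[req.params.id];
  if (!posts) {
    return res.status(404).json({ error: "User not found" }); // meaningful failure code
  }
  return res.status(200).json(posts); // JSON body with an explicit success code
});

// POST creates a resource: 201 signals creation, 400 signals a bad payload
app.post("/users/:id/posts", (req, res) => {
  const posts = postsByUser[req.params.id];
  if (!posts) return res.status(404).json({ error: "User not found" });
  if (!req.body?.title) {
    return res.status(400).json({ error: "Missing required field: title" });
  }
  posts.push({ title: req.body.title });
  return res.status(201).json(posts[posts.length - 1]);
});

app.listen(3000);
```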

Why I Chose This Resource

As we did POGILs in class, I frequently asked myself in which scenarios creating a robust REST API is most essential. This blog post stood out because it bridges theory and practice, making it directly applicable to homework assignments and future projects.

Reflection

The blog post reinforced several concepts I’ve encountered, such as the importance of clear URL structures and consistent use of HTTP methods. However, it also introduced ideas I hadn’t fully understood before, such as the role of API versioning in preventing disruptions for existing users when introducing updates.

One particularly impactful takeaway was the emphasis on client feedback through proper HTTP status codes. In my past in-class activities, I’ve realized that it is important for an API to clearly communicate success or failure states, yet I hadn’t prioritized this aspect. I now see the value of using codes like 201 Created for successful resource creation or 400 Bad Request for errors, which enhances the developer experience.

Future Work

Going forward, I plan to apply these best practices to my API design tasks. For instance, in my upcoming assignments or personal projects, I will ensure that URL structures are logical and intuitive, aligning them with their resource relationships. Additionally, I’ll pay closer attention to implementing proper status codes and securing APIs with HTTPS to protect sensitive data, such as IDs or SSNs.

This resource has also inspired me to explore tools like Postman or Swagger for testing and documenting APIs.

Conclusion

The blog post Best Practices for REST API Design not only refreshed my technical knowledge but also provided ideas for creating APIs that are robust, secure, and user-friendly.

https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

(Week-12) Copyright and Licensing in Software Development

A lot of developers today mistakenly think that publishing code without a license automatically makes it free for public use. Under copyright law, however, code without an explicit license is still the property of the original owner. That means users cannot legally copy, modify, or distribute it without permission, which can lead to legal disputes or uses of the code the original owner never intended. Posting code on GitHub without a license defaults to GitHub’s terms of service, which allow limited collaboration on the platform but grant no broader rights to use the code.

When it comes to software projects, choosing a license that fits the needs of the project is extremely important to how others interact with your code. Licenses typically fall under two categories: copyleft licenses and permissive licenses. A copyleft license, such as the GNU General Public License (GPL), ensures that any derivative works must retain the same license. These licenses promote transparency and guarantee that the software and its derivatives remain open source. This suits developers who want to foster a community, benefit from others’ improvements and testing, and prevent proprietary derivatives. However, some large companies may avoid copyleft code, since building on it could hand a competitive advantage to competitors interested in their work. On the other hand, permissive licenses, such as the MIT or Apache 2.0 licenses, allow derivatives to be licensed under proprietary terms. This approach provides as much “freedom” as possible while making it easier for businesses to use the code without copyright issues. While this can lead to a large increase in usage, it also means companies are under no obligation to contribute their versions of the code back.
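
In practice, declaring a license is usually a small, explicit step. As a hypothetical illustration (the file and function below are mine, not from any source discussed here), many projects put the full license text in a LICENSE file at the repository root and add a one-line SPDX identifier to source files so the license is machine-readable:

```typescript
// SPDX-License-Identifier: MIT
// The SPDX tag above states the license unambiguously. Without it, and
// without a LICENSE file in the repo, default copyright law applies and
// readers of the code receive no rights to copy, modify, or distribute it.
export function greet(name: string): string {
  return `Hello, ${name}!`;
}
```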

“How Copyright Works (Part 5): Copyright Licenses in Simple Terms” is a short video by the YouTube channel What is Law Even. It is the last installment of a five-part series that explains the many rigid aspects of copyright law and how it applies today. This installment focuses on the basics of copyright licensing specifically. The narrator explains what a copyright license is and the possible royalties that may be attached to a given license. It is a straightforward video that is short but filled with quick bits of knowledge important in the copyright sphere.

In short, copyright licensing is extremely important when it comes to software development. Licenses are a key part of a project’s workflow, and even when one isn’t explicitly stated, there is a good chance a default set of rules still applies in the background. Always be wary of copyright laws, as they are there to protect owners from unlawful uses of their projects.

Link: https://www.youtube.com/watch?v=XjwOTNBRFpE

  • Elliot Benoit

From the blog CS@Worcester – Elliot Benoit's Blog by Elliot Benoit and used with permission of the author. All other rights reserved by the author.

Microsoft’s Git Problem

This week, I came across an article about Microsoft encountering an issue with Git and version control. Microsoft’s software engineers use Git as their version control system, and they recently discovered a problem with the amount of disk space being used by one of their repositories. The repository started at about 2 GB, then grew to 4 GB, but soon it was taking up 178 GB. Obviously, this is a bit of an issue: it would be very difficult for a team to work with a repository that balloons that much in that amount of time. But how does this happen? The article states that the issue stems from name-hash collisions: Git’s packing heuristic was grouping files badly, so it effectively stored far larger differences for each commit than it needed to. Since this is a huge problem, steps are already being taken to resolve it.
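
For what it’s worth, you can keep an eye on this kind of bloat in your own repositories with a couple of standard Git commands (a small sketch; the output depends on your repo):

```sh
git count-objects -vH   # report the size of loose and packed objects in human-readable units
git gc                  # repack objects and prune unreachable ones to reclaim space
```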

This issue was particularly interesting to me because we are currently learning about Git and how to properly use it in my CS 348 class. Git is an incredibly useful and necessary tool in the computer science world. Seeing an issue of this scale at a company this important grabbed my attention immediately. I was also curious to see how Git is used on large projects at a company like Microsoft.

Seeing that problems like this can occur was for sure interesting. What was more interesting to me, though, was that by the time anyone could report on this, changes were already being made. The article does state that this issue would not affect smaller repos, which is what I am dealing with at the moment. Still, it is fascinating to see how on top of everything these developers are in order to ensure a smooth-running product. It makes me feel better knowing that these issues will be resolved by the time I could ever encounter them. It also motivates me knowing that some day I will have to be this vigilant in my own work to truly be successful. The fact that over 90% of developers use Git for version control was intriguing as well. I knew it was popular, but I was also aware of other version control systems. This just goes to show how reliable Git is, despite occasional issues such as this one. I am glad to be learning how to use this system, and I am excited to learn more about it going forward.

https://www.techzine.eu/news/devops/125694/flaw-in-git-bloated-microsoft-repository-by-a-factor-of-35/

From the blog CS@Worcester – Auger CS by Joseph Auger and used with permission of the author. All other rights reserved by the author.

What is Scrum?

Understanding Scrum: A Framework for Team Success

Scrum, a popular framework for managing complex projects, has transformed the way teams collaborate to deliver value. I explored an in-depth explanation of it through an article I found on Scrum.org, which provided a clear overview of the principles and showed how Scrum facilitates productivity and adaptability in many different environments.

A Quick Summary

The article breaks down Scrum as a “framework” designed to help teams address complex problems while delivering high-value results. Originating from Agile principles, Scrum relies on time-boxed iterations called Sprints, typically lasting two to four weeks, where teams focus on completing a specific set of tasks. Key roles in Scrum include the Product Owner, who prioritizes work; the Scrum Master, who ensures the process runs smoothly; and the Development Team, which executes the tasks.

Scrum also emphasizes artifacts like the Product Backlog and Sprint Backlog, which guide work priorities and task distribution. Regular ceremonies such as Sprint Planning, Daily Standups, Sprint Reviews, and Retrospectives foster transparency and continuous improvement.

Why I Chose This Resource

We learned this in class, and I think it is a good refresher to read about it. In the world of software development, I often think of projects where collaboration and adaptability are required. Scrum is a cornerstone of Agile development, making it a vital topic to understand for both academic projects and industry roles. This article stood out for its clarity, structured explanation, and relevance to course concepts, particularly software design and project management.

Reflections

The article reinforced the value of frameworks in managing stressful development cycles. I was intrigued by the emphasis on teamwork and iterative progress. While individual accountability is important, Scrum places significant focus on collaboration, encouraging team members to engage in problem-solving and decision-making collectively.

A key takeaway for me was the importance of transparency, which Scrum achieves through its artifacts and ceremonies. For example, the Daily Standup ensures that every team member is aligned, minimizing the risk of miscommunication, something I can apply to my group projects in the future.

What I plan to use

In future group projects, I aim to incorporate elements of Scrum, such as structured meetings and task prioritization using a backlog. For example, if I do have a software development project, I plan to propose implementing a Sprint-like system where my team can review progress and adjust objectives weekly. Long term, understanding Scrum positions me better for internships and career roles where Agile methodologies are prevalent.

This article not only described what Scrum is but also underscored its practical applications for me, affirming its relevance to professional settings.

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

REST API Design

Modern web applications are built on top of REST APIs, which provide the essential connection between the client and the server. This week, I discovered a blog post called “Best Practices for REST API Design” by John Au-Yeung and Ryan Donovan, which outlines important techniques for developing secure, performant, and user-friendly APIs. Because JSON is widely accepted and lightweight, the post promotes it as the standard data format. Logical endpoint structures that rely on nouns rather than verbs are crucial for clarity, while HTTP methods like GET, POST, PUT, and DELETE determine the action. Nesting related resources improves readability, but endpoints should be kept simple to prevent unnecessary complexity.

The blog discusses how to handle problems gracefully by returning easily comprehensible error messages to facilitate debugging and employing relevant HTTP status codes (e.g., 400 for Bad Request, 404 for Not Found). It also highlights the need for query parameters for pagination, sorting, and filtering when handling big datasets. For protecting APIs, security measures including role-based access control, SSL/TLS encryption, and the principle of least privilege are essential. Caching is emphasized as a way to improve performance, though developers should make sure it doesn’t produce stale data. Lastly, it is advised to version APIs, frequently using prefixes like /v1/, in order to guarantee backward compatibility and permit incremental enhancements.
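
As a rough sketch of the filtering, pagination, and versioning advice combined (again a hypothetical TypeScript/Express example of my own, with a made-up employees dataset):

```typescript
import express from "express";

const app = express();

// Hypothetical dataset standing in for a database table
const employees = [
  { id: 1, lastName: "Smith", age: 30 },
  { id: 2, lastName: "Jones", age: 25 },
  { id: 3, lastName: "Smith", age: 40 },
];

// Versioned path (/v1/) plus query parameters for filtering and pagination:
// GET /v1/employees?lastName=Smith&page=1&limit=2
app.get("/v1/employees", (req, res) => {
  const { lastName, page = "1", limit = "20" } = req.query as Record<string, string>;

  // Filter first so page numbers refer to the filtered result set
  const filtered = lastName
    ? employees.filter((e) => e.lastName === lastName)
    : employees;

  const start = (Number(page) - 1) * Number(limit);
  res.json(filtered.slice(start, start + Number(limit)));
});

app.listen(3000);
```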

Since we’ve been learning about REST API design in class, I chose to read a blog post about it in order to gain a deeper understanding of best practices. I selected this post because, in addition to explaining each essential aspect of REST API design, such as JSON usage, appropriate endpoint naming, error handling, security, caching, and versioning, it also provides code blocks as examples, which helped me understand and visualize the concepts more clearly.

What caught my attention the most was the part about using logical nesting for endpoints. It described how grouping related endpoints makes APIs easier to use and comprehend. It also made the point that endpoints shouldn’t replicate the database’s structure; avoiding that mirroring increases the security of the API by shielding private data from attackers. Reading this made me more aware of how endpoint design can affect both security and usability, and it demonstrated the significance of properly planning endpoint architectures.

This article impacted my perspective on API design by emphasizing the necessity of striking a balance between usability and simplicity. I want to apply these ideas in future projects by making solid security procedures, efficient error handling, and well-defined endpoint structures top priorities. By using the strategies covered in this blog, I intend to create APIs that are effective and simple for developers to use, guaranteeing that they can be maintained and offer a satisfying experience over time.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

GRASP

Similar to SOLID, GRASP is an acronym for a set of design principles used in object-oriented programming to clearly define responsibilities within a software system. GRASP stands for General Responsibility Assignment Software Patterns and focuses on nine different patterns or principles. A blog post on kamilgrzybek.com called “Grasp – General Responsibility Assignment Software Patterns Explained” does a really good job of explaining all nine of the patterns and gives helpful examples of each. According to the article, “GRASP is a set of exactly 9 General Responsibility Assignment Software Patterns. As I wrote above assignment of object responsibilities is one of the key skill of OOD. Every programmer and designer should be familiar with these patterns and what is more important – know how to apply them in every day work (by the way – the same assumptions should apply to SOLID principles).” This quote shows that the GRASP principles are just as important to know as the SOLID principles. I think that everyone should learn the GRASP principles alongside the SOLID principles, and I plan on trying to use them in the future.

The nine principles consist of the following:

Controller
Creator
High Cohesion
Indirection
Information Expert
Low Coupling
Polymorphism
Protected Variations
Pure Fabrication

Controller tends to represent the overall system or the device that the software is running within. It can also represent a use case scenario within which the system operation occurs. The right controller depends on the high-level design of the system being used, but generally we need to define the object that orchestrates a business transaction before it is processed.

Creator is a hard principle to explain, so I think the example that the article gave is helpful: “Problem: Who creates object A? Solution: Assign class B the responsibility to create object A if one of these is true (more is better): B contains or compositely aggregates A, B records A, B closely uses A, B has the initializing data for A.”

High cohesion is how well the elements in a class work together. If a class has low cohesion, it contains a lot of unrelated data or behaviors. Classes with high cohesion, meaning their data and behaviors are all related to each other, are much easier to understand and maintain.

Indirection avoids direct coupling between two or more things and allows an intermediate object to mediate other components.

Information expert assigns a responsibility to the class that holds all of the information needed to fulfill it.

Low coupling is when objects are more independent and isolated, rather than being completely dependent on other parts of the system. This helps reduce the risk of breaking the system.

Polymorphism is an essential principle of OOD. It allows different types of objects to be treated as a single type.

Protected variations identify points of predicted instability and assign responsibilities in order to create a stable interface around them.

Pure fabrication assigns a highly cohesive set of responsibilities to an artificial class, one that doesn’t represent a problem domain concept, when assigning them to a domain class would violate high cohesion or low coupling.
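
To tie a few of these together, here is a small hypothetical sketch in TypeScript (the Order/OrderLine example is my own illustration, not from the article) showing Creator, Information Expert, and Low Coupling in one place:

```typescript
// Information Expert: OrderLine holds the data needed to compute its own subtotal
class OrderLine {
  constructor(private quantity: number, private unitPrice: number) {}

  subtotal(): number {
    return this.quantity * this.unitPrice;
  }
}

// Creator: Order compositely aggregates OrderLines, so Order is the one that creates them
class Order {
  private lines: OrderLine[] = [];

  addLine(quantity: number, unitPrice: number): void {
    this.lines.push(new OrderLine(quantity, unitPrice));
  }

  // Information Expert again: Order has all the lines, so it computes the total.
  // Callers stay loosely coupled: they never have to touch OrderLine directly.
  total(): number {
    return this.lines.reduce((sum, line) => sum + line.subtotal(), 0);
  }
}

const order = new Order();
order.addLine(2, 10);
order.addLine(1, 5);
console.log(order.total()); // 25
```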

Link: https://www.kamilgrzybek.com/blog/posts/grasp-explained

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

REST API Understood

Throughout the year we have used REST APIs in class more and more, and through the microservices kit activities I found more interest in how they are used. I sought out more information to expand my knowledge and found an article on REST APIs: REST APIs: How They Work and What You Need to Know by Jamie Juviler.

The article begins by explaining the importance of REST APIs. REST is a widely used framework that allows applications to communicate over the web by following a set of design principles. The article explains that it is used to enable data sharing and integrations between systems, allowing software to interact with external services and improving functionality and user experience. The article continues by explaining how REST APIs work: clients make requests to a server for resources, and the server responds with the current state of the requested resource, usually in a standard format like JSON. The API can also allow clients to modify or add resources on the server. The article explains there are six key principles to know, with a quick client-side sketch after the list:

  1. Client-Server Separation: REST APIs separate the client and server, meaning that the client can interact with the server without needing to know how the server works internally.
  2. Uniform Interface: RESTful APIs require a standardized way of interacting, usually through the HTTP methods GET (retrieve data), POST (create new data), PUT (update existing data), and DELETE (delete data).
  3. Stateless: Each request to the server is independent and must contain all the information necessary for the server to fulfill the request.
  4. Layered System: In a REST API, there can be intermediary layers (e.g., security layers, load balancers, or caching layers) between the client and the server. These layers should not impact the client-server communication and should be transparent to the client.
  5. Cacheable: Responses from the server may be cacheable, meaning that the client can store the data locally and avoid repeated requests for the same resource.
  6. Code on Demand (Optional): This optional feature allows servers to send executable code (like JavaScript) to the client, enabling the client to execute this code locally.
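
As a small client-side sketch of a couple of these principles (my own hypothetical example; the URL and token are placeholders), note how the request is self-contained, uses a standard verb, and simply receives a representation of the resource’s current state:

```typescript
const API_BASE = "https://api.example.com"; // placeholder base URL
const TOKEN = "placeholder-token";          // placeholder credential

async function getUserPosts(userId: string) {
  // Stateless: every request carries everything the server needs,
  // so no session state has to live on the server between calls.
  const response = await fetch(`${API_BASE}/users/${userId}/posts`, {
    method: "GET", // uniform interface: a standard HTTP method
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json(); // the current state of the resource, as JSON
}
```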

The article then explains the benefits of REST APIs, which include flexibility, scalability, interoperability, simplicity, and cost-effectiveness. Juviler then explains how to set up REST APIs with an admin and the article concludes with some examples of the use of REST APIs. 

REST APIs have become very important in class, and because of recent assignments I wanted to learn more. Through this blog post I understand REST more thoroughly, along with the reasons for using it. I now better understand its feasibility and its use in microservices back-end architectures like the one in Thea’s Pantry.

Source:

https://blog.hubspot.com/website/what-is-rest-api

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Docker: Diving In

After looking at the team aspects of software development, I wanted to shift focus to the tools at a team’s disposal. In conjunction with version control software, development environments are among the most important tools a team has. In projects outside of class I am working closely with Docker, and we briefly utilized it to prepare for the CS-448 capstone, so I wanted to spend more time learning about these development environments. To do so I found an article on the basics of understanding Docker: In-depth Docker by Walter Code.

This blog post begins by outlining what it contains: the article expands on configuration management, Docker’s technical components, and Dockerfile commands. Walter explains that configuration management builds on Docker’s layered image management, which addresses issues with traditional “golden image” models that often lead to unmanageable images; Docker’s approach allows for faster iteration and better flexibility. The article continues by highlighting key technical components such as isolation of filesystems, processes, and networks, and resource management via cgroups. Docker is extremely flexible: it can run on any x64 host with a modern Linux kernel and is compatible with OS X and Windows via virtual machines. The post also introduces Docker’s user interfaces, such as Shipyard, Portainer, and Kitematic, which help with container management. Walter also touches on essential Dockerfile commands, like ADD, CMD, and RUN, to help users configure containers. The article wraps up by discussing Docker’s filesystem layers, which allow changes to be applied to a writable layer while keeping the underlying read-only layers intact. Additionally, Docker Compose enables users to manage multi-container application stacks, while Docker Swarm helps scale workloads across container clusters. The post finishes by highlighting Docker’s simplicity, security, and rapid deployment, making it a compelling tool for developers, with more practical examples to follow in future posts.
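
To ground the Dockerfile commands the article mentions, here is a minimal hypothetical example of my own (assuming a simple Node app; the file names are placeholders):

```dockerfile
# Base image; every instruction below adds a filesystem layer on top of it
FROM node:20

WORKDIR /app

# ADD copies files from the build context into the image
ADD package.json ./

# RUN executes a command at build time and bakes the result into a layer
RUN npm install

ADD . ./

# CMD sets the default command to run when a container starts from this image
CMD ["node", "server.js"]
```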

This article helped me better understand how Docker can be essential for teams. It also gave me more knowledge to aid me in my personal projects that utilize Docker, which is a key step in understanding how best to work with a team using these tools. What I found most helpful was the explanation of how Docker’s configuration management can be used and of why Docker Compose is so useful. Next in my blog I will cover more strategies and tools to help me function better on a software development team.

Source: https://waltercode.medium.com/in-depth-docker-faa0c4dd9a63

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.