Category Archives: CS@Worcester

Waterfall 2.0

 

While looking around the internet, I found that there is a Waterfall 2.0 version of the waterfall software development methodology that we learned about at the start of class. Reading the article showed me that even the most established practices can transform to meet the demands of a new era. I chose this article because it provides a basic summary of Waterfall 2.0 and of how it differs from the traditional waterfall model. Since we discussed why the waterfall method was being replaced by Agile and Scrum, I figured some people would still try to improve and keep using the older method, and this article showed me how waterfall has been adapted to the needs of current workflows. I decided to look into it as background on the improved version, which revives a dying method in a new form.

The article uses baking a cake as an example to illustrate the principles of Waterfall 2.0. It walks through the steps of planning, mixing, baking, decorating, and enjoying to explain how each phase works. For example, planning allows adjustments to the cake, showing adaptability; mixing shows collaboration with others; and baking shows continuous monitoring, making sure the cake does not burn or become too dry. This illustrates how the 2.0 version fixes many problems of waterfall. In the end, these changes aim to solve the traditional waterfall model's core problem: its inability to adapt to a constantly changing environment and to unexpected changes in a program's requirements and needs.

Reading this article reminded me why the waterfall model is being replaced by Agile and Scrum: it cannot meet the industry's demands in a changing environment. Using a cake to explain the new waterfall made the concept more understandable and memorable, and the example clearly showed what the 2.0 version tries to do in order to survive. The concept of Waterfall 2.0 was particularly impactful because it demonstrated that no methodology is entirely outdated; with the right adjustments, even traditional approaches can remain relevant in a changing world. Previously, I thought I might only use the waterfall methodology for a short project where what I had to do was very clear. Now that I know about Waterfall 2.0, I can try to use it for future projects. For example, it could be useful in projects with fixed deadlines and regulatory requirements, where a hybrid approach provides adaptability and flexibility.

Link https://de.celoxis.com/article/waterfall-is-dead-long-live-waterfall-2-0

 

From the blog Sung Jin's CS Devlopemnt Blog by Unknown and used with permission of the author. All other rights reserved by the author.


API Endpoints, just so you know.

If you’re studying or working with REST APIs, the chances of running into endpoints are extremely high. I struggled a bit to wrap my head around them, so here’s a casual breakdown. It’s not as scary as it sounds.

What Are API Endpoints?

Let’s start with the basics. An API (Application Programming Interface) is like a menu at a restaurant (HOW MANY TIMES MUST I SAY THIS?). It tells you what’s available (services or data) and how to ask for it (requests). Endpoints are specific URLs on that menu where you go to get what you need.

Imagine you’re at a pizza place. You want to order a pizza with extra cheese and pepperoni. The endpoint is like the part of the menu that says, “Build Your Own Pizza.” It’s where your request (extra cheese, pepperoni) is processed and sent to the kitchen (server), and then your pizza (data) comes back to you.

In tech terms, an endpoint is a specific location on a server where an app interacts with the API to get or send information. For example:

https://api.pizzaplace.com/orders

Here, /orders is the endpoint where you place or check your pizza order.

How Do They Work?

Endpoints use URLs and HTTP methods to define what action you’re taking. These methods are the “verbs” of APIs:

  • GET: Asking, “What’s on the menu?” You’re retrieving information.
  • POST: Placing your pizza order. You’re creating something new.
  • PUT or PATCH: Updating your order, like adding mushrooms.
  • DELETE: Canceling your order. Sad day.

When your request hits the endpoint, the server processes it and sends back a response, often in JSON (an easy-to-read data format).
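To see what those verbs look like in code, here is a minimal sketch in Python using the requests library. The pizza-place host, paths, and JSON fields are hypothetical (this is not a real service); the point is only how each HTTP method maps to a call and how the JSON response comes back.

# Hypothetical calls against the pizza API from above (the host, paths,
# and fields are made up for illustration; this is not a real service).
import requests

BASE = "https://api.pizzaplace.com"

# GET: "What's on the menu?" -- retrieve information.
menu = requests.get(f"{BASE}/menu")
print(menu.status_code)   # e.g. 200 if the request succeeded
print(menu.json())        # response body parsed from JSON

# POST: place a new order -- create something new.
order = requests.post(
    f"{BASE}/orders",
    json={"size": "large", "toppings": ["extra cheese", "pepperoni"]},
)
print(order.json())       # typically echoes back the created order

# DELETE: cancel order number 1 (a made-up id). Sad day.
requests.delete(f"{BASE}/orders/1")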

Why Do They Matter?

Endpoints make modern apps and websites possible. For instance, when you check Instagram, there’s an API endpoint fetching your posts. When you order on Amazon, there’s an endpoint processing your purchase. They’re everywhere!

Endpoints keep things organized. Instead of exposing a server’s entire functionality, APIs provide specific endpoints for specific actions. It’s like keeping the kitchen off-limits in a restaurant—you just see the front counter.

Real-World Example

Let’s say you’re building a library app. You might have API endpoints like these:

  • GET /books: Retrieve a list of all books.
  • GET /books/42: Get details about book #42.
  • POST /books: Add a new book.
  • DELETE /books/42: Remove book #42.

Each endpoint serves a purpose, and together they make the app functional.
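If you were writing the server side of that library app, the endpoints might be wired up roughly like this. This is a hypothetical sketch using Flask; the data, field names, and handler names are my own invention, and a real app would use a database instead of an in-memory dict.

# A rough server-side sketch of the library endpoints listed above.
from flask import Flask, jsonify, request

app = Flask(__name__)
books = {42: {"id": 42, "title": "Some Book"}}   # placeholder data

@app.route("/books", methods=["GET"])
def list_books():
    # GET /books: retrieve a list of all books
    return jsonify(list(books.values()))

@app.route("/books/<int:book_id>", methods=["GET"])
def get_book(book_id):
    # GET /books/42: details about one book, or 404 if it doesn't exist
    book = books.get(book_id)
    if book is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(book)

@app.route("/books", methods=["POST"])
def add_book():
    # POST /books: add a new book from the JSON request body
    new_id = max(books) + 1
    books[new_id] = {"id": new_id, **request.get_json()}
    return jsonify(books[new_id]), 201

@app.route("/books/<int:book_id>", methods=["DELETE"])
def delete_book(book_id):
    # DELETE /books/42: remove a book
    books.pop(book_id, None)
    return "", 204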

Wrapping It Up

API endpoints are the way apps and servers talk to each other. They’re like doorways leading to the data and services you need. Understanding endpoints is crucial because they’re the foundation of so much of what we build. Whether you’re connecting a frontend to a backend or building your own API, endpoints are the unsung heroes of the digital world.

Next time someone mentions an “endpoint,” you’ll know it’s just a fancy name for a digital doorway.

References
https://www.geeksforgeeks.org/what-is-an-api-endpoint/
https://www.ibm.com/topics/api-endpoint

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Creating a Sprint Goal and Backlog

As a one-man Scrum team, a lot of the framework provided with Scrum and Agile can be hard to apply. For example, how do I define a sprint goal for my team when I am the team, or how do I determine how much work the team is capable of when, again, I'm the team?

Shouldn’t being a single-person Scrum team make these easier to accomplish? It would stand to reason, yes, since I don’t have to confer with others on a sprint goal, and who better to know my own capabilities than myself? But the issues arise in a few places.

The most important: as someone who is new to Scrum, how will I know I’m setting a realistic, achievable sprint goal? How will I know I’ve chosen the right goal for that given part of development?

Another issue is that, as the one who sets the goal and the timeframe, who’s going to keep me honest and working as hard as I can without burning out? I could push myself incredibly hard and burn out after one sprint, or I could accomplish almost nothing because I just didn’t feel like it and didn’t have to answer to anyone.

Thankfully, the first issue can be solved by researching sprint planning. In “Creating a Sprint Backlog: Your Guide To Scrum Project Management” by Dana Brown, she details how to create a sprint goal, how to create a sprint backlog, and how to prioritize tasks.

She highlights the first two steps of sprint planning as setting a sprint goal and identifying important product backlog items. Thankfully this is where my first issue is solved. As someone inexperienced to scrum, I would start at step two which is identifying the important product backlog items and using those to create a sprint goal. This way my sprint goal is relevant and knocks off the items highest on the priority list.

From there I can break down my product backlog items into smaller tasks and add them to the sprint backlog, finally organizing these tasks based on their priority and prerequisite tasks.
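To make that flow concrete for myself, here is a toy sketch of it. This is my own illustration, not from Brown's article; the backlog items, priorities, and task names are invented.

# A toy sketch of turning prioritized product backlog items
# into a sprint backlog of smaller tasks.
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    priority: int                      # lower number = higher priority
    tasks: list = field(default_factory=list)

product_backlog = [
    BacklogItem("User login", 1, ["design form", "add validation", "write tests"]),
    BacklogItem("Dark mode", 3, ["pick color palette", "update styles"]),
    BacklogItem("Password reset", 2, ["email flow", "reset endpoint"]),
]

# Step 1: pick the highest-priority items; these shape the sprint goal.
sprint_items = sorted(product_backlog, key=lambda item: item.priority)[:2]

# Step 2: break those items into the smaller tasks that form the sprint backlog.
sprint_backlog = [task for item in sprint_items for task in item.tasks]
print(sprint_backlog)   # tasks for "User login" and "Password reset"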

So my first issue has been resolved: I now have a method of creating a sprint goal relevant to what’s highest priority. As for my second issue, unfortunately I don’t think I’m going to find an answer to that one online. It’s going to be trial and error, as well as being completely honest with myself about whether the workload is too much or too little. Ultimately, it’s going to come down to how disciplined I can be.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

TPM Podcast With Arjun Subramanian: Burnout

I chose to write about this Podcast episode in a blog because it’s a topic that resonates with me personally. This episode speaks specifically about the effect COVID had, and how burnout relates to that, which is something that affected me heavily during the COVID year. This isn’t really a topic we discussed directly in class, but Arjun Subramanian is discussing this with Mario Gerard within a technical project management context, which I feel relates to the general theme of what we’ve been discussing thus far in class.

Something Gerard and Subramanian talk about is establishing a boundary between work life and home life. This comes naturally when you’re clocking in to work from 9-5 and forgetting all about work as you come home, but when working from home it becomes harder to mentally separate the two. Working from home in the tech industry has its undeniable benefits, and there’s a reason so many people in the field prefer it, but increased productivity goes hand in hand with increased effort, or at least prolonged effort. It becomes harder to create a boundary between work and life when there’s no physical separation between the two. This is something that impacted me heavily, because prior to this I always kept a strict separation between my school life and family life. I never invited school friends over; I only did schoolwork privately in my room or set aside time at school to do it; I never talked about school at home or about family at school. While I was at UMass in 2019, I had a taste of freedom for the first time. School life and social life were blending together in a way that I had never allowed back home. Coming home from UMass due to COVID was a huge detriment to that, and having to manage my online courses alongside my tense familial struggles was something that wore me thin.

The podcast also discusses how toxic environments can contribute to burnout, and the necessity of having the agency to manage and create your boundaries. You can speak up, and you work as hard as you can within your specific timeframe, and you put your heart into it, and once that time is up you come back to earth and you don’t allow work to cross that boundary into personal life.

As far as toxic situations go, a toxic situation doesn’t necessarily mean a difficult one. As Subramanian talks about, you can have a difficult situation that’s right, where you need to show “grit and tenacity”. But a toxic situation is one you need to walk away from, for your own sake. There may not be a clear way to distinguish a toxic situation from a merely difficult one, but there are signs, and while they talk about some of those signs on the podcast, the bottom line is that you need to keep your own well-being in mind as you work.

The discussion about boundaries and toxic situations is one that I feel was a major contributor to the way I was affected by burnout over the COVID years, and it is something I’m aware I need to look out for, now and in the future.

Podcast Link: https://www.mariogerard.com/tpm-podcast-with-arjun-subramanian-project-manager-burnout/

From the blog CS@Worcester – Justin Lam’s Portfolio by CS@Worcester – Justin Lam’s Portfolio and used with permission of the author. All other rights reserved by the author.

Why is Git popular among Version Control Systems

One of the interesting blog articles I found is about the different version control systems developers have used to manage software over time. While we have mostly been using Git, this article talks about version control systems other than Git that have existed in the past. Two of the systems the article mentions are Apache Subversion (SVN) and Mercurial. The article gives an overview of each: Apache Subversion maintains source code on a central server and works great for a centrally located team, while Mercurial gave most developers easy access to hosting through Fog Creek Software, which is now Glitch.

The reason I chose this blog post is to learn more about the other version control systems that have existed alongside Git, the advantages of those systems, and how each of them came to be a top choice among developers over time. Since we focus only on Git throughout the course, I understand the structure where everybody can fork, clone, and branch when writing code, then contribute changes back to the repository. I have also learned that Git is easier to use when managing version control through issues, commits, and pull requests, which I found more interactive and highly valuable for teamwork and collaboration.

As for the other version control systems, although the structure of Apache Subversion is about the same as how we use Git, the dependence on a centralized SVN server makes committing changes to the overall repository less agile. According to Quentin Headen, the centralized SVN server requires a network connection to be available in order to commit changes to the repository; otherwise you can’t commit at all. The second drawback he mentions is the heavy branching system, where branches are difficult, or even impossible, to remove. In my opinion, this is a clear way to see the disadvantages of hosting a repository on a centralized server, and why a distributed version control system is often preferred: it gives developers flexibility when working on the codebase and addresses the issues that centralized version control systems ran into.

After reading this blog article, I learned more about the two types of version control systems: centralized and distributed. Although Git is popular due to its strong platform and built-in user base, others could choose a centralized system for enterprise teams for scalability reasons. In my opinion, it still depends on the type of project I am working on, and choosing the right version control system will make it easier for me to keep track of project developments, ensuring the version is up to date and accessible for all users.

Link to Blog Article: https://stackoverflow.blog/2023/01/09/beyond-git-the-other-version-control-systems-developers-use/

From the blog CS@Worcester – Hello from Kiet by Kiet Vuong and used with permission of the author. All other rights reserved by the author.

Our Approach to Testing – Rich Rogers

In this blog post, Rich Rogers, a Testing Capability Lead at Scott Logic, discusses how the people there approach testing as part of their Development and Delivery process, particularly through their six principles. He heavily emphasizes context as the first principle: when testing, things will always change. You can’t standardize testing, because every project will require different tests. In many ways, this resembles the Agile ideas that we’ve discussed in class, specifically the portion about “Responding to Change over Following a Plan”, and it also goes hand in hand with another one of Rogers’s principles, which is “Risks over Requirements”. I completely agree that you may be given a set of requirements as a team, and it may satisfy the customer simply to fulfill those requirements, but there is value in looking beyond the requirements and exploring other risks or potential problems that may not have been stated. A plan exists as a guideline, not a strict rule.

Besides those two principles, the remaining ones are: “Value in Tooling”, “Quality for Humans”, “Bring a Testing Mindset”, and “Collaborate and Cross-skill”. To elaborate, the blog discusses Value in Tooling as understanding the tools you have, and taking opportunities to run repeatable automated checks when applicable, so long as they are efficient in terms of cost and effort. Quality for Humans refers to the notion that at the end of the day, humans will be the ones using these tools. The goal is to provide something that humans will be satisfied with, and will be accessible for a person to use. In some manner, this resembles the Agile principle of Customer Collaboration, or even the Individuals or Interactions part of Agile. The Testing Mindset principle is a little more broad, in that it refers to a questioning mindset that is aligned with wanting the product to succeed. Every tester has a unique way in which they approach testing, and so long as the end goal aligns, every mindset is valid. Collaborate and Cross-Skill here refers to the notion that, while the industry encourages individual testing, understanding your team’s skills and working to complement each other can be helpful.

Ultimately, I think these principles can be summed up as “be flexible”, very similar to how Agile works. A willingness to understand and use tools in testing, taking the human aspect into account, a willingness to approach things differently and apply a level of curiosity and questioning, and a willingness to collaborate with others, especially those whose skills and expertise differ from yours, are all examples of setting aside rigid plans and processes. To do your best work, you must be willing to approach anything in multiple ways and with multiple mindsets. Having chosen this blog post for its insight into testing, I definitely find myself agreeing with the overall principle behind this testing approach. I don’t know how much control I will have over testing in the future, but I would certainly like to apply a similar approach to how I test things going forward.

Blog Link: https://blog.scottlogic.com/2024/10/30/our-differentiated-approach-to-testing.html

From the blog CS@Worcester – Justin Lam’s Portfolio by CS@Worcester – Justin Lam’s Portfolio and used with permission of the author. All other rights reserved by the author.

Best Practices of REST API Design

I chose the blog post “Best Practices for REST API Design” by John Au-Yeung because it addresses the best practices developers should follow when working with REST APIs. The blog shows us strategies we can use to design APIs to the best of our abilities. In class, we have been using REST APIs since Thea’s Pantry utilizes one. While we have already been learning a lot about them through classwork and homework, it was interesting to read other perspectives such as this blog. This is really our first introduction to this kind of design in our computer science classes, so the more we can read and learn, the better.

In the blog, the author focuses on creating user-friendly APIs that adhere to widely accepted principles. The foundation of REST API design is using nouns in endpoints to represent resources, such as /users or /orders, rather than actions like /getUser. This approach keeps the API intuitive and aligns with REST conventions. HTTP methods play a vital role, with verbs like GET, POST, PUT, and DELETE defining the operations on these endpoints. The principle of statelessness is key to this design, meaning each request from a client must contain all the necessary information for the server to fulfill it. This avoids maintaining client-specific state on the server, simplifying scaling and debugging. Error handling is another essential practice. APIs should return meaningful and consistent HTTP status codes, such as 404 for “not found” or 400 for “bad request”, paired with descriptive error messages to guide users on fixing issues. For managing large datasets, pagination, filtering, and sorting should be supported. These features enhance performance by limiting the data returned and allowing clients to specify exactly what they need. APIs should adopt JSON as the standard response format, as it’s widely used and easy to parse. Including appropriate content-type headers ensures compatibility across platforms. These practices foster better user experiences, maintainability, and scalability. By following them, developers can create APIs that are reliable, predictable, and efficient, promoting successful integrations across diverse client applications.
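As a quick illustration of a few of these practices, here is a small sketch of my own (not from Au-Yeung’s post) in Python with Flask: a noun-based /users endpoint, JSON responses, meaningful status codes, and simple pagination through query parameters. The data and field names are invented.

# Sketch of noun-based endpoints, status codes, and pagination.
from flask import Flask, jsonify, request

app = Flask(__name__)
USERS = [{"id": i, "name": f"user{i}"} for i in range(1, 101)]   # fake data

@app.route("/users", methods=["GET"])
def list_users():
    # Pagination: ?page=2&limit=10 returns a slice instead of everything.
    page = int(request.args.get("page", 1))
    limit = int(request.args.get("limit", 10))
    start = (page - 1) * limit
    return jsonify(USERS[start:start + limit]), 200

@app.route("/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    # Meaningful status codes: 200 when found, 404 with a message when not.
    user = next((u for u in USERS if u["id"] == user_id), None)
    if user is None:
        return jsonify({"error": f"user {user_id} not found"}), 404
    return jsonify(user), 200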

From the blog, I was able to learn the best practices for designing REST APIs. Going forward, I plan to incorporate these practices as I continue to learn more about front-end work. After reading, I feel like I will be able to deepen my learning in this area as well as share these practices with my peers.

https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.