Category Archives: Week 13

Git: Merge conflicts

Week-13: 12/2/2024

This week in class we worked on merge conflicts and how to resolve them. I completed an activity focused on resolving a merge conflict, and the experience was not as frustrating as I thought it was going to be. While we worked through the activity, Professor Wurst highlighted how important it is to understand version control systems like Git and to develop effective strategies for resolving conflicts collaboratively.

On this week’s blog hunt, I came across a helpful blog post by Sid Puri titled “Git Merge Conflicts,” which provided a clear explanation of what merge conflicts are, why they occur, and how to resolve them using Git tools. It broke down the process into manageable steps and even offered advice on how to prevent these conflicts from happening in the first place.

One of the key takeaways for me was understanding the root cause of merge conflicts: multiple developers changing the same part of a file on different branches. Because Git can’t automatically figure out which changes to keep, it flags these conflicts and requires manual intervention. The post explained how Git reports conflicted files so you can identify them, and then how to manually merge code using conflict markers – those weird symbols like <<<<<<<, =======, and >>>>>>> that used to make my head spin.
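
To make those markers concrete, here is a small, made-up example of what a conflicted file can look like after a merge fails (the function and branch name are hypothetical):

    <<<<<<< HEAD
    def greet(name):
        return "Hello, " + name + "!"
    =======
    def greet(name):
        return f"Hi, {name}."
    >>>>>>> feature-branch

The code between <<<<<<< HEAD and ======= is the version on the current branch, and the code between ======= and >>>>>>> is the incoming version; resolving the conflict means replacing the whole block with the code you actually want and deleting the three marker lines.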

The post also emphasized the importance of communication in preventing merge conflicts. This really resonated with me because our team conflict stemmed from two of us accidentally modifying the same section of code. If we had just communicated about our tasks beforehand, we could have avoided the whole issue. Moving forward, I’m definitely going to advocate for more frequent team check-ins and a more organized approach to task allocation.

What I really appreciated about the blog post was its practical approach to conflict resolution. It explained how to use built-in Git tools like git status and git diff to navigate conflicts with confidence. Mastering these tools will definitely save me time and frustration in future projects. Plus, learning how to handle and resolve conflicts collaboratively is a transferable skill that will be valuable in any team setting, not just software development.
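
In practice, the workflow the post describes might look something like this sketch (the file and branch names are hypothetical):

    $ git merge feature-branch
    Auto-merging greet.py
    CONFLICT (content): Merge conflict in greet.py
    Automatic merge failed; fix conflicts and then commit the result.
    $ git status    # lists greet.py under "Unmerged paths"
    $ git diff      # shows each conflicting hunk with its markers
    $ git add greet.py    # after editing the file to resolve the conflict
    $ git commit          # completes the merge

And if a merge goes sideways partway through, git merge --abort backs out entirely and restores the pre-merge state.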

Overall, this blog post was a great resource that directly complemented our coursework on team-based software development. It reinforced the idea that understanding and resolving merge conflicts isn’t just a technical skill; it’s an essential component of effective teamwork in software engineering. I feel much more prepared to tackle these challenges in the future and to contribute more effectively to my team projects.

Blog link: https://medium.com/version-control-system/git-merge-conflicts-4a18073dcc96

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Software Architecture Patterns

Week-13: 12/2/2024

Understanding software architectural patterns is critical in the software development industry for creating robust, scalable, and maintainable products.

A recent Turing blog post, “Software Architecture Patterns and Types,” has been useful in solidifying my understanding of this important concept. The article provides a comprehensive overview of various patterns, including monolithic, microservices, event-driven, layered, and serverless architectures, with clear explanations of each pattern’s design principles, advantages, and limitations.

For instance, while monolithic architectures offer simplicity, they often struggle with scalability. On the other hand, microservices excel in scalability and allow for independent deployment but can introduce complexity in maintenance and debugging. The article also explores emerging trends like serverless architecture, emphasizing their importance in modern cloud-based systems.

The practical examples and concise explanations in the article made it extremely relevant to what I learned in my classes, particularly my software construction, design, & architecture class. The discussion on system scalability and maintainability directly aligns with the topics we’re covering.

One of the most valuable takeaways for me was the emphasis on aligning architectural decisions with business objectives. The article effectively illustrates that a microservices architecture, while attractive for its scalability, might be overkill for a small-scale project. This resonated strongly with my recent experience in a group project where we debated between microservices and a layered design. Reflecting on the deployment and dependency management challenges we faced, the article validated our decision to opt for a simpler layered design as a better fit for our project’s scope.

Furthermore, the article’s discussion of serverless architecture was truly eye-opening. I had previously held a somewhat simplistic view of serverless as a universal scaling solution. However, the article shed light on its potential drawbacks, such as vendor lock-in and latency issues. This more nuanced perspective will undoubtedly inform my approach to future projects, encouraging me to critically evaluate new trends before jumping on the bandwagon.

Moving forward, I intend to apply this knowledge by diligently assessing the specific needs and constraints of each project before selecting an architectural pattern. For instance, when tackling a high-traffic e-commerce site, I would now consider employing an event-driven architecture to effectively handle asynchronous data flow. Alternatively, for smaller projects, I would advocate for a monolithic or layered approach to minimize overhead.
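
To make the event-driven idea concrete, here is a minimal sketch (my own illustration, not from the article) of a publish/subscribe event bus in Python; in a real high-traffic system the bus would be a message broker and the handlers would run asynchronously:

    from collections import defaultdict

    class EventBus:
        """Minimal in-memory publish/subscribe bus."""
        def __init__(self):
            self._handlers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._handlers[event_type].append(handler)

        def publish(self, event_type, payload):
            # The publisher doesn't know or care who is listening;
            # each subscriber reacts to the event independently.
            for handler in self._handlers[event_type]:
                handler(payload)

    bus = EventBus()
    bus.subscribe("order_placed", lambda order: print("charging card for", order["id"]))
    bus.subscribe("order_placed", lambda order: print("queueing shipment for", order["id"]))
    bus.publish("order_placed", {"id": 42})

The point of the pattern is the decoupling: the code that places the order never has to change when a new reaction (say, sending a confirmation email) is added.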

By understanding the trade-offs inherent in different architectural patterns, I feel better prepared to design and build software systems that are not only functional but also scalable, maintainable, and aligned with business goals.

Blog link: https://www.turing.com/blog/software-architecture-patterns-types

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Understanding Clean Code in Reagent with the Clean Coders

This week, I explored the blog post “Mastering Reagent: Finding the Balance Between Readability and Performance” on Clean Coders. This post delves into the challenges of balancing readability and performance when using Reagent, a ClojureScript library for building user interfaces. It highlights techniques to maintain code clarity while optimizing performance, especially in interactive web applications.

The author discusses common pitfalls, such as overusing lifecycle methods, creating unnecessary computations, and mishandling state updates, all of which can lead to unresponsive or hard-to-maintain code. The blog suggests best practices, including leveraging idiomatic ClojureScript constructs and modularizing components to enhance both readability and runtime efficiency. Practical examples include minimizing expensive computations with reagent.core/track and preventing unnecessary re-renders via Reagent’s should-component-update lifecycle hook. The author also emphasizes that while performance is crucial, readability often has a long-term payoff, especially in collaborative environments where maintainable code saves time.

I chose this blog post because our course has emphasized the importance of clean, maintainable code in software development and maintenance. Those lessons have had us weighing the trade-offs between performance and code clarity, and understanding how to strike that balance in practice is very important. Additionally, we’ve explored many similar clean code attributes in class, which this post expands on and ties to the context of UI frameworks.

What stood out to me most was the emphasis on maintaining readability without compromising performance, a principle applicable across programming domains. For instance, I’ve sometimes been tempted to optimize prematurely, leading to messy code that became hard to debug or modify later. This post reinforced the idea that readable code doesn’t just benefit the developer in the moment but also improves team productivity and ensures the application can be scaled or updated more easily in the future.

One of the key takeaways for me was the use of tools like track to manage expensive computations efficiently. I had not considered how reactivity frameworks like Reagent allow for targeted optimizations without sacrificing clarity. Moving forward, I plan to apply this principle in my projects by carefully identifying bottlenecks and ensuring that optimizations are implemented only where they provide tangible benefits.

This material has given me a new perspective on how to approach UI development. While I’ve worked primarily with simpler frameworks, I now see how the same balance between readability and performance applies universally. Whether I’m working with Reagent, React, or any other tool, the insights from this post will help guide my decision-making and ensure I focus on long-term maintainability as well as immediate efficiency.

Overall, this blog post offers practical advice for developers working with Reagent or similar UI libraries. I highly recommend reading it for anyone interested in crafting user interfaces that are both efficient and easy to maintain. The post can be found here: “Mastering Reagent: Finding the Balance Between Readability and Performance.”

From the blog CS@Worcester – CS Journal by Alivia Glynn and used with permission of the author. All other rights reserved by the author.

Scrum

Hello, and today’s blog post is about Scrum. For those of you who do not know, Scrum is basically a framework for working as a team efficiently and effectively. I am glad that Professor Wurst chose to expose us to Scrum in his curriculum this semester because this method of teamwork is very interesting, and it actually seems like it could be useful to me in the future. The thing I really like about Scrum is that it is INCREDIBLY organized. Each day is planned out to the hour, and I think that having a schedule that is organized and efficient is very important when you actually want to get work done. Planning out your schedule/agenda ahead of time ensures that everyone on the team stays on track with minimal distractions. This method is also all about improvement: there are reflections/retrospectives at the end of every sprint (a short, fixed-length chunk of work within a project). We went over this in detail during our class time, which ingrained a lot of the Scrum knowledge in our heads. We also took official quizzes which helped us become “certified.” I think taking these Scrum tests was beneficial because we had to reach a certain score, and if we didn’t reach it, we had to redo the tests until we had a much better understanding of the concepts.

Scrum follows the Agile philosophy, which I talked more about in one of my previous blogs. Scrum is built on three pillars: transparency, inspection, and adaptation. The official Scrum website describes working in Scrum as “working through small experiments, learning from that work and adapting both what you are doing and how you are doing it as needed.” Scrum also has five important values: courage, focus, commitment, respect, and openness. Each of these values is crucial to making the process work.

The article I chose to help me explain Scrum for this blog is linked here: An Introduction to Scrum | Lucidspark

This article describes Scrum as “an iterative, adaptable approach to software development.” It talks about how most software development methodologies are very linear, leaving little room for adjustment along the way. Scrum, on the other hand, as I mentioned before, follows the Agile approach rather than the Waterfall methodology, which makes for great adaptability. The article also reiterates how Scrum is all about teamwork and collaboration, and it goes into detail about each role on the Scrum team and all the events in a sprint. I think this article is worth a read, and it is supposed to be just an 8-minute read as well.

Overall, Scrum is definitely one of the best software development methodologies in sync with Agile ideologies, so I am glad that we learned about it!

From the blog cs@worcester – Akshay’s Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

Week-13 Post

This week’s post will cover REST APIs (Representational State Transfer Application Programming Interfaces). One of the key principles of RESTful APIs is the separation between the frontend UI the user interacts with and the backend server. Postman’s blog highlights this: “The client’s domain concerns UI and request-gathering, while the server’s domain concerns focus on data access, workload management, and security”. The primary purpose of REST APIs is to allow different software systems to interact and exchange data over the web. REST relies on stateless communication, where each request from a client contains all the information needed for the server to process it.

REST APIs use HTTP methods and standard URL structures to enable communication between clients and servers. HTTP methods play an essential role in REST APIs, corresponding to the CRUD (Create, Read, Update, Delete) operations in software. The POST method is used to create data, while GET retrieves data from the server. PUT and PATCH are used to update existing data, with PUT replacing the entire resource and PATCH modifying specific parts. DELETE removes data. In addition, REST APIs use status codes to indicate the outcome of an operation. For example, a 200 status code indicates a successful operation, 201 signifies resource creation, 404 means a resource was not found, and 500 represents a server error. Including appropriate status codes in API responses helps clients understand the results of their requests and handle errors effectively.
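
As a small sketch of these methods and status codes in Python using the requests library (the base URL and resource are made up for illustration):

    import requests

    BASE = "https://api.example.com"  # hypothetical API base URL

    # GET retrieves a resource; 200 means success, 404 means not found
    response = requests.get(f"{BASE}/products/42")
    if response.status_code == 200:
        product = response.json()
    elif response.status_code == 404:
        print("Product not found")

    # POST creates a resource; 201 signals successful creation
    response = requests.post(f"{BASE}/products",
                             json={"name": "mug", "price": 9.99})
    if response.status_code == 201:
        print("Created:", response.json())

Each request here stands alone and carries everything the server needs, which is exactly the stateless style described above.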

The blog post I researched by Postman highlights how REST is widely used across various industries. For example, e-commerce platforms use REST APIs to manage product information and process orders. Social media applications utilize REST APIs to handle user profiles and posts. Cloud services often provide REST APIs to allow developers to interact with their resources programmatically. The blog also mentions another type of API called SOAP, which stands for Simple Object Access Protocol. SOAP is considered a protocol, while REST is considered a set of guidelines. Unlike REST, which typically sends JSON over HTTP to resource URLs, SOAP uses XML envelopes for sending data. One of the main reasons SOAP might be preferred over the more popular REST is that SOAP supports WS-Security, which provides a framework for securing messages, including encryption, digital signatures, and authentication. This makes SOAP more suitable for applications handling sensitive data; corporations like banks and hospitals dealing with sensitive user information could utilize it to prevent information breaches.

These APIs provide a consistent way for systems to interact and exchange data while adhering to a set of well-defined principles. By understanding HTTP methods, status codes, and data formats, developers can create APIs that other developers can readily understand and use.

Blog: https://blog.postman.com/rest-api-examples/

From the blog CS@Worcester – Computer Science Through a Senior by Winston Luu and used with permission of the author. All other rights reserved by the author.

Code Review

Source: https://about.gitlab.com/topics/version-control/what-is-code-review/

This article is titled “What is a code review?” As clearly stated by the title, the article explains the process of code reviewing. “A code review is a peer review of code that helps developers ensure or improve the code quality before they merge and ship it.” Code reviews help identify bugs, increase the overall quality of code, and deepen understanding of the source code. Code review, as the name suggests, happens after a software developer has finished coding: the code needs to be checked for bugs or conflicts before it is merged into an upstream branch. A code reviewer “can be from any team or group as long as they’re a domain expert. If the lines of code cover more than one domain, two experts should review the code.”

Adhering to a solid code review process allows for continuous improvement of code and aims to ensure that faulty code isn’t shipped for customers/users to see and use. This process isn’t just important for the code itself, but also for all of the team members on a software development project: while reviewing the code, meaningful knowledge of the source code is shared between team members to ensure that it is being implemented properly. The main benefits of the code review process are the sharing of knowledge, discovering bugs earlier, maintaining compliance, enhancing security, increasing collaboration, and improving code quality. Code reviews help maintain compliance because different developers have different backgrounds, and thus different personal styles when developing; code reviews bring these people together to maintain a standard coding style. Security is enhanced because “security team members can review code for vulnerabilities and alert developers to the threat. Code reviews are a great complement to automated scans and tests that detect security vulnerabilities.”

There are many benefits to code review, but there are some disadvantages, including longer time to ship, focus being pulled from other tasks, and large reviews leading to longer review times. These can be described as necessary evils given the sheer number of positives that code reviews offer in software development.

I chose this article because it was published by GitLab, software that we are heavily using in class for version control, and I thought it would be interesting to read about this specific topic from the syllabus. Version control software such as GitLab is what makes code reviews possible, so diving deeper into the topic through an article published by this popular company was tempting. Before reading this article I understood that code reviews were important for pinpointing bugs or difficulties before merging code upstream, but I never really thought about the implications for security or different development styles. I’ll definitely keep this information in mind during future code reviews on the job, to remind myself that bugs aren’t the only important thing during a code review.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

YAGNI

Source: https://www.geeksforgeeks.org/what-is-yagni-principle-you-arent-gonna-need-it/

This article is titled “What is YAGNI principle (You Aren’t Gonna Need IT)?” YAGNI is “a principle in software development that suggests developers should only implement features that are necessary for the current requirements and not add any additional functionality that might be needed in the future.” The reasoning is that adding features which might only potentially be needed invites more bugs, increased complexity, and longer development time, all of which lead to increased cost. The YAGNI principle is similar to the KISS principle (Keep It Simple, Stupid), which also advocates for simplicity; both encourage developers to avoid complexity when it isn’t necessary. Developers should follow the YAGNI principle to avoid four kinds of cost: the cost of building, the cost of delay, the cost of carry, and the cost of repair. The cost of building refers to the total effort and resources invested in the project; building things that aren’t needed increases cost overall. The cost of delay refers to missed opportunities: time spent on unnecessary features inevitably delays the development of more important ones. The cost of carry refers to the burden of dragging along unnecessarily complex features, which make it harder to work on other parts of the project, require more time, and increase cost moving forward. Lastly, the cost of repair, or technical debt, refers to the costs associated with bugs or mistakes that occur during development. YAGNI is important for keeping the development process focused, efficient, and cost-effective. Teams can put YAGNI into practice by prioritizing communication between team members: meeting the actual requirements, making a simple plan, setting aside ideas that don’t serve current goals or deadlines, and keeping good records of project progress. YAGNI enables simplicity, faster development, flexibility, reduced risk, and cost savings by complementing other development principles while avoiding unnecessary implementations.
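
As a tiny, made-up Python illustration of the principle (the report scenario is my own, not from the article): the first version speculatively supports options nobody asked for, while the second implements only today’s requirement:

    # Speculative version: extra parameters "just in case"
    def export_report(data, fmt="csv", encrypt=False, compress=False):
        ...  # untested branches for XML, encryption, and compression

    # YAGNI version: exactly what the current requirement calls for
    def export_report_csv(data):
        return "\n".join(",".join(str(value) for value in row) for row in data)

If XML export or encryption ever becomes a real requirement, it can be added then, with actual requirements to design and test against.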

I chose this article because I appreciate how GeeksforGeeks simplifies topics within the software development community. I don’t recall this principle being explicitly mentioned in class, but we have definitely alluded to it, and I thought it would be beneficial to read more about it, considering that it is in the syllabus. It was interesting to learn that the YAGNI principle complements other software development principles, such as the KISS principle, and distills them into a guideline that prioritizes simplicity over complexity and extra features. It embodies the idea that “less is more.” This is a great set of guidelines I’ll be sure to follow in industry, because it reminds me that sometimes doing less work isn’t a bad thing. Instead of creating a multitude of features, focusing on the ones that are critical and required soonest will still get the job done.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Understanding The Waterfall Methodology

At the start of the semester, we talked about Software Methodologies, and one of the topics we covered was the Waterfall Methodology. The article I chose focuses on the Waterfall Methodology, which emphasizes completing and approving each phase before moving to the next. This approach helps reduce errors, making the process more time-efficient and improving project documentation.

The Software Development Life Cycle (SDLC) is divided into six phases: planning, requirement analysis, software design, development, testing, and deployment. When the Waterfall Model is applied to the SDLC, it adds structure by defining the time and duration of each phase. This helps developers and teams better understand how long the process will take and what to expect at each step.

The article also explains when the Waterfall Model is the best choice for a project. Since every methodology is different, this model works well for projects with clear concepts, less complexity, a solid and unchanging plan, and minimal room for errors (no repeated phases). It’s commonly used in industries like healthcare and banking. The article also discusses the advantages and disadvantages of the Waterfall Methodology.

One advantage I really like is that each phase is completed one at a time, providing a clear and straightforward plan for programmers to follow. However, one disadvantage is that progress can be slow, which might make it harder to estimate when the project will be completed.

The article outlines the distinct phases of the Waterfall Model:

  • Requirement Analysis: Gathering and documenting what the software needs to achieve.
  • System Design: Translating requirements into system specifications, including hardware and architecture considerations.
  • Implementation: Developing individual units or components based on the design.
  • Testing: Integrating and rigorously testing components to find and fix issues.
  • Deployment: Releasing the completed software to clients or the market.
  • Maintenance: Providing ongoing support, including patches, bug fixes, and updates.

I chose this article because it provides a clear and concise explanation of the Waterfall Model. I also wanted to learn more about this methodology and how it differs from others. Waterfall is one of the more popular choices among engineers and developers, and now I understand why. Its structured approach makes it appealing for certain types of projects, even though it may not always be the fastest.

Looking ahead, I want to apply what I’ve learned about the Waterfall Methodology to my future career as a software engineer and projects like my Software Capstone class next semester. Understanding how to plan and organize phases systematically will be important for delivering a successful project. I also want to explore other software methodologies to understand their strengths and weaknesses. Knowing when and what type of project each methodology works best for will help me decide how to approach different challenges in my career.

Source:
What Is The Waterfall Model?

Citation:
Team, Codecademy. “What Is the Waterfall Model for Software Development?” Codecademy Blog, 7 June 2024, http://www.codecademy.com/resources/blog/what-is-the-waterfall-model/. 

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

The Wisdom of the White Belt

The “White Belt” pattern resonates with me on a personal level, as it captures the importance of maintaining a beginner’s mindset, a quality that I believe is crucial for continuous growth and learning in any field, especially in the realm of software development.
One interesting aspect of this pattern is the acknowledgment that expertise and mastery can sometimes become a double-edged sword. While we strive for proficiency in our field, there is a risk of becoming overconfident or developing a fixed mindset, where we rely too heavily on our existing knowledge and fail to embrace new perspectives or paradigms.
The analogy of “wearing the white belt” serves as a humbling reminder that true mastery lies not in holding on to what we already know but in building a mindset of humility and openness to learning. It challenges us to set aside our assumptions and approach new challenges with a fresh and curious mind, akin to a beginner’s eagerness to explore and discover.
What particularly resonates with me is the idea of “unlearning what we have learned,” as emphasized in the quote from “The Empire Strikes Back.” (Favorite movie haha) This concept challenges the notion that our knowledge is fixed and pushes us to question our assumptions and be willing to adapt and evolve our thinking as we encounter new contexts or technologies.
Furthermore, the pattern’s point about approaching new domains with respect and curiosity, rather than assuming expertise, is the one I find most worth debating. I say so because it reminds us that true understanding often comes from collaborating with others and acknowledging the unique perspectives and realities they bring to the table.
One aspect I find especially compelling is the idea of questioning how veterans in the field approached coding in the past versus how we approach it now. While we may be tempted to dismiss older methodologies or technologies as outdated, there is value in understanding the historical context and the challenges that shaped those approaches. By adopting a beginner’s mindset, we can explore these historical perspectives with an open mind, potentially uncovering insights or principles that remain relevant today.
In essence, the “White Belt” pattern encourages us to adopt a mindset of continuous learning, humility, and adaptability – qualities that are essential for advancing in the landscape of software development. It reminds us that true mastery is not a destination but a lifelong journey of growth and exploration, where we must be willing to set aside our preconceptions and embrace the wisdom of a beginner’s mind.

andicuni
May 5, 2024

From the blog CS@Worcester – A Day in the Life as a CS Blogger by andicuni and used with permission of the author. All other rights reserved by the author.

The Power (and Curse) of Retreating into Competence

This week, I chose to read the “Retreat into Competence” pattern and found its overall message quite interesting, since I see it as helpful but potentially harmful. The pattern suggests that when I face challenges that leave me feeling overwhelmed, or when I realize my limitations, I should temporarily retreat into known territory to prepare myself to confront the unknown. Doing so will enable me to rebuild my confidence and prepare to tackle whatever challenges lie ahead in my future career. Because retreating can turn harmful, the pattern also highlights the importance of setting time limits on the retreat to keep it from becoming an issue. Setting time limits and asking mentors for help will ensure that if I must retreat into competence in the short term, it won’t lead to the stagnation of my development in the long term.

I believe that setting time limits for a retreat into competence is vitally important for keeping one’s journey from apprentice onward on the proper path. It works to prevent people from becoming complacent or stagnant when returning to their competencies. Placing limits encourages an environment where one can gather one’s thoughts and pool together any necessary resources before promptly returning to the thick of it. Asking mentors for help is also important in overcoming obstacles: even if something seems overwhelming at first, with the guidance of others it may appear far more manageable than it did before.

I think the pattern provides a good framework for dealing with the limits of my knowledge due to my lack of practical experience. It’s common for anyone to feel overwhelmed by new things, especially in the early stages of learning or when taking on new challenges while transitioning from a learning environment into a professional one.

I also have to mention that retreating into competence, while helpful, may be harmful if not done correctly. Making a habit of retreating may result in taking fewer risks, even with time limits in place. I feel that as time goes on the necessity of retreating should fade away, but if a habit of retreat is developed and relied upon too heavily, it may do more harm than good in the long run.

Overall, I think the “Retreat into Competence” pattern sets a solid framework for new apprentices to follow by destigmatizing the fear of the unknown. It shows that retreating when faced with a challenge can be beneficial if done correctly.

From the blog CS@Worcester – Eli’s Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.