Category Archives: CS-348

Learning about Git

CS-348, CS@Worcester

In class we are going over how to use Git without causing conflicts with the upstream. First we learned how to create a copy of the upstream in the cloud by forking the repository. Then we cloned the fork onto our local machines so we could start working with the code. Finally, we learned how to make branches of the code, which allowed us to start making changes.
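As a rough sketch of what that workflow looks like on the command line (the URLs and branch name here are placeholders, not our actual class repository):

  # the fork itself is created on the hosting site (e.g. GitHub or GitLab)
  git clone https://github.com/your-username/project.git   # copy your fork to your local machine
  cd project
  git remote add upstream https://github.com/original-owner/project.git   # keep a link to the original
  git checkout -b my-feature                               # create a branch for your changes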

In my off time I started to learn more about why branches are so important for group projects. For example, if someone makes a change on the main branch and sends it to the upstream, there might be no conflict at first; but if someone else then commits changes to the upstream, conflicts appear. I was reading the class textbook and some online articles, and they stated that it is better for the group if people send commits to the fork first. In my opinion, this practice helps streamline the process. What if your coworkers want to see what changes you made before they are committed to the upstream? They can simply look at the fork copy of the upstream.

This lets the team determine whether the changes you made are actually good, or whether they need to be revised again. Reviewing on the fork saves the team time and helps them complete the project, ensuring the product will be ready for the public.
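Continuing the hypothetical sketch above, sending commits to the fork first might look like this, with the merge into the upstream happening only after teammates have reviewed the branch:

  git add changed-file.txt             # stage your work
  git commit -m "Describe the change"  # record it locally
  git push origin my-feature           # publish the branch to your fork, not the upstream
  # teammates review the branch on the fork, then a merge request carries it upstream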

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

The Most Useful Tool In a Developer’s Toolkit: Development Environments

Intro

Choosing a development environment is a decision that can be made on feeling, or by taking the time to think through each choice and analyze which best fits your needs. Either way, the environment a developer uses is critically important: it’s where all the code in any project is written, making it the tool every developer spends the most time using. It’s a personal choice, and this blog by Matthew LeRay goes over everything you need to know about development environments.

Summary of Source

This blog covers everything you need to know about development environments, including their purpose, their importance, and what IDEs are, with some examples. The main sections are:

  1. Definition and Purpose: A structured setup of tools and processes that enhances software creation by automating tasks, supporting debugging, and ensuring consistency with production.
  2. Types of Development Environments: Explains the purpose and distinct roles of development, testing, staging, and production environments.
  3. Integrated Development Environment (IDE): The evolution of IDEs and what they offer, such as speed/efficiency and customizable features.
  4. Setting up a Development Environment: Goes through the steps of configuring your environment, from choosing the IDE, to configuring it and using tools like build automation.

The Reason I Chose This Source

For any new programmer, looking for an IDE can be confusing because of a lack of knowledge about what they even are, mixed with the daunting task of choosing one to learn and use. I chose this blog because it bridges that gap, taking a new programmer from having no idea what a development environment is to choosing and setting up their IDE. It’s a very reader-friendly resource that even some experienced developers could learn from.

A Reflection of IDE’s

I personally use Visual Studio Code for the majority of what I program, but I have used IntelliJ as well. I chose my IDE based mostly on appearance and general word of mouth, which is why I gravitated towards VS Code, as it’s arguably the most popular and user-friendly IDE. I do like IntelliJ because it feels good to use, and although a drawback for others might be that it’s a Java-only IDE, I only use Java, so that isn’t a problem for me. VS Code also has a great variety of personalization options because of the extensions tab in the IDE. Extensions are great not only for appearance but also for functional improvements. I think extensions are a big reason VS Code is so popular, along with its ability to support many languages rather than restricting you to one the way IntelliJ does. An IDE encompasses a ton of different tools a developer uses, so picking one that fits your needs is important. Becoming comfortable and familiar with the IDE you use matters more than switching to the “best” IDE based on abstract metrics that others believe are the most important things to have in an IDE.
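As one small example of that personalization, VS Code can even install extensions from its command-line tool; the extension IDs below are just illustrative choices of mine, not recommendations from the blog:

  code --install-extension vscjava.vscode-java-pack   # Java language support
  code --install-extension esbenp.prettier-vscode     # an auto-formatter
  code --list-extensions                              # see everything installed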

My Future IDE Plans

I think I will continue to use VS Code for now, but I can see myself trying more technical, less user-friendly editors like Vim in the future. There really isn’t a need to switch if what you have is working, and honestly I don’t think it should be switched often. I will also probably use IntelliJ more, as I do think it’s the best IDE for Java, which is the language I use most often.

Citation

Understanding Modern Development Environments: A Complete Guide by Matthew LeRay

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Transparency and Autonomy: Better Together

In continuing my research on team management strategies, I delved deeper into the software development side of team management. In doing so I discovered the scrum.org blog, which has many articles aimed at understanding Scrum. Two of the most important principles in Scrum are transparency and autonomy, and I wanted to understand how to achieve them in a team setting. The article I found, Transparency and Autonomy: Two Sides of the Same Coin by Sanjay Saini, explains how the two play into each other.

The article begins by explaining Agile’s fast-paced style of producing working code, and how autonomy and independence can be essential for fast results. In Agile, teams seek this autonomy so they can make decisions and deliver value without excessive oversight. Transparency is essential for fostering that autonomy: by making work visible, tracking progress, and openly addressing challenges, a team can earn more autonomy and trust. The article highlights five key points that help build this trust and efficiency:

  1. Visibility Creates Trust: By sharing progress and challenges during Scrum events like Daily Scrums and Sprint Reviews, it shows that the team is accountable and can be trusted to be autonomous.
  2. Transparency in Challenges Leads to Solutions: Being open about struggles encourages collaboration and problem-solving, proving the team can manage setbacks and seek out help when they need it independently.
  3. Data-Driven Transparency Builds Confidence: Using metrics like velocity and burndown charts shows consistent results, building leadership confidence in the team’s capability.
  4. Transparency Leads to Better Decision-Making: When a team has full visibility into goals, priorities, and feedback, it can make informed decisions independently. Information needs to be freely shared for autonomy and good decisions to occur.
  5. Open Communication Builds Long-Term Autonomy: Regular, open communication about decision-making processes helps cultivate trust and secure more autonomy over time, as the team can continue to build trust through constant demonstration of these values.

The article concludes by saying that transparency creates a culture of trust and accountability, enabling Scrum teams to earn the autonomy needed to make decisions and drive value.

This article helped me understand the importance of these values to a Scrum team’s operation. This is a key step in understanding Scrum’s importance in the operation of a team, as things such as transparency can make a smoother work environment for everyone by providing autonomy. Next in my blog, I will look into articles relating to development environments such as Docker or GitPod and their importance for maintaining a productive team.

Source:

https://www.scrum.org/resources/blog/transparency-and-autonomy-two-sides-same-coin

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Merge Conflicts

I believe that version control systems like Git are an important tool for developers. Yet one of the more challenging aspects of working with Git is resolving merge conflicts, a common occurrence in collaborative projects. For this blog entry, I chose to review the Graphite guide on resolving merge conflicts. After encountering the topic in homework and in class, I found this resource’s clear, step-by-step approach to handling merge conflicts both insightful and practical.

Guide of Merge Conflicts

The guide explains the basics of merge conflicts in Git, outlining what they are and why they occur. It details the types of conflicts, which arise from edits to the same line of code or from overlapping changes across different branches, and walks through resolving them using Git commands like git status and git diff to identify issues and git merge to bring changes together. The guide concludes with best practices to prevent merge conflicts, such as pulling the latest changes regularly, using feature branches, and maintaining clear communication within a team.
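As a hedged sketch of that workflow (the branch and file names here are hypothetical, not the guide’s exact example), resolving a conflict generally looks like this:

  git merge feature-branch   # attempt the merge; Git pauses if there are conflicts
  git status                 # list the conflicted files
  git diff                   # inspect the conflicting hunks
  # edit each conflicted file, keeping the right code between the <<<<<<< and >>>>>>> markers
  git add resolved-file.txt  # mark the file as resolved
  git commit                 # finish the merge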

Why I Chose This Resource

I chose this resource because the topic was a little confusing to me at first, and reading multiple articles and websites like this one refreshes your knowledge. Merge conflicts are not just an abstract concept we’ve discussed; in class we learned about the importance of version control in collaborative coding environments, how tools like Git enable teamwork by allowing simultaneous contributions, and how conflicts can arise when changes overlap. Even so, this has to be one of the most stressful aspects of group projects.

Personal Reflections and Insights

Reading this guide helped demystify merge conflicts. I particularly liked the detailed explanations of the commands, as it’s easy to misuse or misinterpret them under pressure or when you are clueless. While I’ve often focused on “fixing the conflict,” I’ve neglected to verify how the changes interact, which has caused issues in past projects.

Another valuable takeaway was the importance of adopting preventive measures. In class, we learned about best practices like pulling changes frequently and using feature branches, but this guide provided additional context that made these tips feel more actionable.

Future Practice

I want to apply this knowledge in upcoming group projects. Whether working on a shared repository for class or contributing to open-source projects, knowing how to resolve merge conflicts efficiently will save time and reduce confusion. This guide also inspired me to explore additional tools, like Visual Studio Code’s merge conflict interface, to streamline the process further. By combining these technical skills with teamwork, I will be better prepared to contribute effectively in collaborative environments.

https://graphite.dev/guides/how-to-resolve-merge-conflicts-in-git

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

Masters in Scrum

One method I’ve encountered repeatedly in both my coursework and during discussions with peers is Agile—specifically, the Scrum framework. To better understand it, I recently read an article titled “Scrum Mastering the 3 Pillars, 5 Values, and 7 Key Principles of Agile Project Management”, which provides a clear breakdown of how Scrum works and why it’s so effective in software development. I found this resource insightful, and it’s something I can definitely apply in the future.

The article explains the fundamental elements of Scrum, which include the 3 Pillars, 5 Values, and 7 Key Principles that form the foundation of this Agile framework. The 3 Pillars—Transparency, Inspection, and Adaptation—ensure that the process is open, regularly assessed, and flexible. The 5 Values—Commitment, Courage, Focus, Openness, and Respect—help create a collaborative and supportive team environment. Finally, the 7 Key Principles emphasize continuous improvement, self-organizing teams, and the importance of simplicity in problem-solving.

I selected this article because, as a beginner in computer science, I wanted to understand how project management frameworks like Scrum can be applied in real-world software development. Being new to coding and programming, I often feel overwhelmed by the amount of information and tools available. Scrum, with its structured approach, offers a clear way of organizing tasks, fostering teamwork, and ensuring that progress is continually monitored. Learning about Scrum is relevant to my future career because it’s widely used in the tech industry, particularly for software development and managing complex projects.

From reading the article, I gained a solid understanding of the core principles that make Scrum effective. The 3 pillars stood out to me, especially Transparency. As a student, I can relate to the importance of transparency in team projects where communication is key to understanding who’s doing what, when, and how. Inspection and Adaptation also made me realize how crucial it is to frequently check our progress and be willing to change course when necessary, which can save a lot of time and effort in the long run.

The 5 Values were a reminder of the importance of collaboration and maintaining a positive, respectful team environment. These values are essential, not just for Scrum but for any professional setting. I particularly appreciated the focus on Courage, which resonated with me as I’m still learning how to approach new and challenging problems in my coursework.

Finally, the 7 Key Principles reinforced the idea of simplicity and the need to avoid overcomplicating solutions, something I’ve noticed in my own work when I get caught up in trying to build complex solutions rather than focusing on what’s truly necessary.

I plan to apply the principles of Scrum, especially the importance of adaptation and simplicity, in my future projects. Whether it’s a group coding project or individual work, Scrum’s emphasis on regular inspection and continuous improvement will help me ensure that I’m always learning and adjusting as I go.

Resource:

“Scrum Mastering the 3 Pillars, 5 Values, and 7 Key Principles of Agile Project Management”

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Software Maintenance

Source: https://www.geeksforgeeks.org/software-engineering-software-maintenance/

This article is titled “Software Maintenance – Software Engineering.” Software maintenance “refers to the process of modifying and updating a software system after it has been delivered to the customer.” This involves many different tasks, including fixing bugs, adding new features, and keeping up with new hardware and software requirements. Maintenance is very important for ensuring that software lasts a long time, but the process can be expensive and complex, so these factors must be taken into account when planning a software development project.

The important tasks in software maintenance are bug fixing, enhancements, performance optimization, porting and migration, re-engineering, and documentation. In short, it is important to find and fix errors quickly, add new features and improve existing ones, improve the performance of the software, adapt the software to run on different hardware, improve the design, and maintain accurate documentation of all of these processes.

There are quite a few different types of software maintenance, but they can be grouped into proactive and reactive types. “Proactive maintenance involves taking preventive measures to avoid problems from occurring, while reactive maintenance involves addressing problems that have already occurred.” Maintenance can be done by stakeholders, the development team, or a third party, and it can be either planned or unplanned. Planned maintenance is regular maintenance such as bug fixes, while unplanned maintenance is reactive work that occurs when something unexpected happens. Maintenance also falls into four categories:

  1. Corrective maintenance: fixing bugs and enhancing the performance of the system.
  2. Adaptive maintenance: modifications made when a customer needs the software to run on a different system.
  3. Perfective maintenance: adapting the software to meet new customer demands.
  4. Preventive maintenance: modifications that focus on preventing future issues with the software.

Software maintenance is important, but there are some things to consider: the cost, the complexity, the possibility of new bugs, users not updating the software, compatibility, technical debt, and end-of-life (when maintenance is no longer possible or cost-effective).

I chose this article because I found it in the syllabus and thought the topic was interesting. We are always learning about the development of software, but the idea of maintaining it over the long term isn’t considered as heavily. A large part of a software development team’s work is obviously to develop software, but it is also important to learn how that software can achieve longevity, free from errors and customer complaints. I will keep what I learned from this article in mind in future projects and when I’m working with a team, so that I develop software with maintenance in mind. If maintenance is considered during the development process, the maintenance process itself will be much easier.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Agile and its Shortcomings

https://www.codingame.com/blog/agile-failed-peek-future-programming

This blog post by CodinGame provides a short history of development methodologies and goes on to make a critique of specifically Agile. It describes how, despite how widespread the methodology has now become, Agile has generally not succeeded as a methodology because of how it has been implemented by corporate management teams. While Agile as a methodology strives to be a set of principles that should guide a team to good practice and a healthy work environment, non-programmers use it as a tool to enforce hierarchical structures and rigid development. Most of what is said can likely also be applied to Scrum, but it is not explicitly mentioned.

This blog interested me because when I learned about Agile and Scrum, I always thought to myself, “Why would you ever not choose these methodologies? They seem far superior to outdated methods like Waterfall.” However, this post opened my eyes to how Agile really only works when implemented as intended by those who wrote the manifesto. It makes very clear that what makes a methodology successful, or a team successful in general, is understanding its intent and reflecting on whether that intent aligns with the work style of the team in question. Generally, I feel that if you’re a business leader who wants a rigid plan, then you should just follow a rigid methodology like Waterfall, rather than creating a fake team experience with a smoke-and-mirrors version of Agile.

The post helped me reflect on what to look for in a well-functioning team. I think these insights can be very valuable for someone when they are looking for a place of work, as when you apply these critiques as a tool to analyze employers, it may be very apparent at some point in the process when a team is run by a group of developers and when a team is run by non-programmers enforcing a strict hierarchical system of development. I think this kind of resource would also be useful if ever in a position where one’s input is valued when evaluating how a team should handle itself, as it can be helpful in recognizing what are good tendencies in a team and what are bad tendencies, especially in a leadership position where hearing the whole team’s voice can be valuable. Being able to express why a decision may be bad is not only valuable for working in a team but also for working under management, as articulated thoughts may be enough to have an impact on their perspective as well.

This blog highlights the importance of understanding and respecting the intent behind methodologies like Agile; it serves as a notice of how we need to hold ourselves and team leaders accountable for how a team chooses to go about development.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

Optimizing Docker in the Cloud

After our recent studies of how Docker manages dependencies and ensures consistent development environments, I was interested in learning more about how to use it, because something like this could have saved me many hours of troubleshooting on a recent research project. This article, written by Medium user Dispanshu, highlights the capabilities of Docker and how to use the service efficiently in a cloud environment.

The article focuses on optimizing Docker images to achieve high-performance, low-cost deployment. The problem some developers run into is very large images, which slow the build process, waste storage, and reduce application speed. I learned that large images result from including unnecessary tools, dependencies, and files, from inefficient layer caching, and from building on top of other full images (like Python in this case). Dispanshu presents the solution in 5 parts:

  1. Multi-stage builds
  2. Layer optimizations
  3. Minimal base images (including Scratch)
  4. Advanced techniques like distroless images
  5. Security best practices

Using these techniques, the image is reduced from 1.2GB to 8MB! The most impactful change is multi-stage builds, to which the writer credits over 90% of this size reduction. I had never used these techniques before, but my interest was definitely piqued when I saw the massive size reduction that resulted from these changes.

The multi-stage builds technique distinguishes the build stage from the production stage. By using this technique, build-time dependencies are separated from the actual runtime environment, which keeps unnecessary files and tools out of the resulting image. Another technique recommends minimal base images, using the slim or alpine version (for Python) over the full version in the build stage; for the production stage, it is recommended to use the scratch base image (no OS, no dependencies, no data or apps). Using a scratch image has pros and cons, but when we are considering image size and optimization, this is an ideal route.
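As a minimal sketch of the multi-stage idea (my own example for a small Python app, not the article’s exact code; a true scratch production stage generally suits compiled languages, so a slim base image stands in here):

  # build stage: install dependencies with the full toolchain available
  FROM python:3.12-slim AS builder
  WORKDIR /app
  COPY requirements.txt .
  RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt

  # production stage: start fresh and copy in only what the app needs at runtime
  FROM python:3.12-slim
  WORKDIR /app
  COPY --from=builder /app/deps /app/deps
  COPY main.py .
  ENV PYTHONPATH=/app/deps
  CMD ["python", "main.py"]

Everything installed purely for the build, such as compilers pulled in by pip packages, never reaches the final image.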

Another interesting piece of this article is the information on advanced techniques like distroless images, using Docker BuildKit, using a .dockerignore file, and eliminating any excess files. The way the writer explains distroless images makes the concept and the use case very clear. The high-level differences between the Full Distribution Image, the Scratch Image, and the Distroless Image are described as different ways of packing for a trip:

  1. Pack your entire wardrobe (Full Distribution Image)
  2. Pack nothing and buy everything at your destination (Scratch Image)
  3. Pack only what you’ll actually need (Distroless Image)

The analogy makes the relationship between these three image options easy to grasp, though I can imagine that applying any of the techniques described would require some perseverance. This article describes an approach that juggles simplicity, performance, cost, and security with very impactful results. The results are proof of the value these techniques can provide, and I will be seeking to apply them in my future work.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

The Best Linux Distro to Learn to Become a Hacker

This week, I will be talking about the “best” Linux distro you should learn in order to get the most out of your hacking. This topic is widely debated, and you will get many different answers from asking around, with people claiming different distros to be the “best.” Although that title can be somewhat arbitrary, there are specific distributions of Linux that are objectively better suited for hacking.

Kali Linux is often regarded as one of, if not the best distros to learn for hacking, because it was specifically designed for digital forensics and penetration testing. Developed by Offensive Security, this Debian-based distro remains a favorite among coders and hackers, and comes loaded with security testing tools, powerful programs, and applications that make life easier for people who want to become a hacker (or are one already). Although it can be a bit overwhelming, Kali is extremely helpful for beginners, because all of these tools are laid out for you and it helps you learn right away how to use them and what their capabilities are.

In the podcast (linked at the end of this post), John mentions that he typically uses Ubuntu, and that he has people who ask, “John, why are you using Ubuntu when you could have been using Kali or Parrot OS?” He responds, “I think it’s really valuable to learn how to install those tools, learn how to configure those tools, watch, and see them break–because then you’ll be able to figure out how to fix them, and you’ll be able to troubleshoot and understand what are the packages, what are the repositories, how does this all work within Linux.” He believes that getting through the learning curve is worth it because it will ultimately be good for your own learning and growth. At the end of the day, each distribution of Linux is going to have its own strengths and weaknesses, and will be a little different from the others. Having knowledge of and experience with these tools will allow you to use them when solving problems and will make you a better hacker.

I have never experimented with Kali Linux before, but I do have some experience when it comes to learning about hacking through Linux. There is a special, custom-made distribution of Linux called Tails, and like Kali, it is based on Debian. However, there is a very big difference between these two distros; while Kali seems to focus more on offensive hacking, Tails is more defensive and prioritizes both privacy and security through its unique interface. It is made to be booted as a live DVD or USB and never writes to the hard drive or SSD, instead using RAM, and leaves no digital footprint on the machine (unless told otherwise).

In conclusion, Linux is still somewhat unfamiliar to me, as I only have limited experience. I would like to learn more about Kali Linux in particular, but would also like to explore other distributions and learn about their potential.

Watch it here:
https://www.youtube.com/watch?v=T7AaBcNj-mA

From the blog CS@Worcester – Owen Santos Professional Blog by Owen Santos and used with permission of the author. All other rights reserved by the author.

Managing a product backlog within Scrum

With an honors project coming up in one of my courses, I was going to have to learn how to become a single-person Scrum team. Since the average Scrum team has seven to ten people, I knew it was going to be both a strange and difficult task.

I knew my first order of business would be to create a product backlog, as I am the product owner (among many other roles, being the only member of the team). Diving in headfirst, I knew what a product backlog was, but not how to set up an effective one.

Thankfully, “A guide on Scrum product backlog” by Brianna Hansen was the perfect blog to stumble across. She eloquently states what a product backlog is, why one should be maintained throughout a project, and how to create a product backlog geared towards success. As an added bonus, the end of the blog even provides a platform to create and maintain a product backlog.

As I previously stated, I knew what a product backlog is: everything that needs to be done for a product, including maintaining it. But as much as a product backlog is a to-do list, one way to increase success is to not overload it. Keep it simple but effective. No one on the Scrum team (in this case, me) wants to scroll through a product backlog for hours.

Time management is crucial for a product backlog. Certain items contained in the backlog are going to be more time consuming than others so considering this when putting product backlog items into the sprint backlog is very important to sprint success.

Defining the product vision is one of the major points she gives for maintaining a successful product backlog. This usually involves the whole team getting involved to make sure the vision for the product is shared. While in my case I may be the only member, Hansen does give some very important questions for me to ask myself when planning my product and adding items to the backlog:

  • “What problem does the product solve?”
  • “Who are the target users or customers?”
  • “What unique value does the product offer?”

Taking these questions into consideration will help to guide me through this project and help to increase my chances of success.

Finding this blog was incredibly helpful for taking my first steps into trying Scrum firsthand, and I intend to use what I learned as I navigate my honors project.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.